00:00:00.001 Started by upstream project "autotest-per-patch" build number 132530 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.089 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.090 The recommended git tool is: git 00:00:00.090 using credential 00000000-0000-0000-0000-000000000002 00:00:00.092 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.147 Fetching changes from the remote Git repository 00:00:00.149 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.213 Using shallow fetch with depth 1 00:00:00.213 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.213 > git --version # timeout=10 00:00:00.270 > git --version # 'git version 2.39.2' 00:00:00.270 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.310 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.310 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.436 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.451 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.465 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.465 > git config core.sparsecheckout # timeout=10 00:00:07.480 > git read-tree -mu HEAD # timeout=10 00:00:07.500 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.530 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.531 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.656 [Pipeline] Start of Pipeline 00:00:07.672 [Pipeline] library 00:00:07.674 Loading library shm_lib@master 00:00:07.674 Library shm_lib@master is cached. Copying from home. 00:00:07.692 [Pipeline] node 00:00:22.694 Still waiting to schedule task 00:00:22.695 Waiting for next available executor on ‘vagrant-vm-host’ 00:28:19.891 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest_3 00:28:19.894 [Pipeline] { 00:28:19.913 [Pipeline] catchError 00:28:19.916 [Pipeline] { 00:28:19.931 [Pipeline] wrap 00:28:19.941 [Pipeline] { 00:28:19.952 [Pipeline] stage 00:28:19.955 [Pipeline] { (Prologue) 00:28:19.977 [Pipeline] echo 00:28:19.978 Node: VM-host-WFP1 00:28:19.985 [Pipeline] cleanWs 00:28:19.996 [WS-CLEANUP] Deleting project workspace... 00:28:19.996 [WS-CLEANUP] Deferred wipeout is used... 
00:28:20.002 [WS-CLEANUP] done 00:28:20.273 [Pipeline] setCustomBuildProperty 00:28:20.362 [Pipeline] httpRequest 00:28:20.764 [Pipeline] echo 00:28:20.766 Sorcerer 10.211.164.101 is alive 00:28:20.779 [Pipeline] retry 00:28:20.781 [Pipeline] { 00:28:20.794 [Pipeline] httpRequest 00:28:20.798 HttpMethod: GET 00:28:20.799 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:28:20.799 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:28:20.800 Response Code: HTTP/1.1 200 OK 00:28:20.801 Success: Status code 200 is in the accepted range: 200,404 00:28:20.801 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:28:20.947 [Pipeline] } 00:28:20.965 [Pipeline] // retry 00:28:20.973 [Pipeline] sh 00:28:21.257 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:28:21.274 [Pipeline] httpRequest 00:28:21.660 [Pipeline] echo 00:28:21.662 Sorcerer 10.211.164.101 is alive 00:28:21.673 [Pipeline] retry 00:28:21.675 [Pipeline] { 00:28:21.692 [Pipeline] httpRequest 00:28:21.696 HttpMethod: GET 00:28:21.697 URL: http://10.211.164.101/packages/spdk_c86e5b1821f2ac77b97aa0d4f25d3c02e876cf47.tar.gz 00:28:21.698 Sending request to url: http://10.211.164.101/packages/spdk_c86e5b1821f2ac77b97aa0d4f25d3c02e876cf47.tar.gz 00:28:21.699 Response Code: HTTP/1.1 200 OK 00:28:21.699 Success: Status code 200 is in the accepted range: 200,404 00:28:21.700 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/spdk_c86e5b1821f2ac77b97aa0d4f25d3c02e876cf47.tar.gz 00:28:23.972 [Pipeline] } 00:28:23.989 [Pipeline] // retry 00:28:23.997 [Pipeline] sh 00:28:24.278 + tar --no-same-owner -xf spdk_c86e5b1821f2ac77b97aa0d4f25d3c02e876cf47.tar.gz 00:28:26.821 [Pipeline] sh 00:28:27.102 + git -C spdk log --oneline -n5 00:28:27.102 c86e5b182 bdev/malloc: Extract internal of verify_pi() for code reuse 00:28:27.102 97329b16b bdev/malloc: malloc_done() uses switch-case for clean up 00:28:27.102 afdec00e1 nvmf: Add hide_metadata option to nvmf_subsystem_add_ns 00:28:27.102 b09de013a nvmf: Get metadata config by not bdev but bdev_desc 00:28:27.102 971ec0126 bdevperf: Add hide_metadata option 00:28:27.124 [Pipeline] writeFile 00:28:27.140 [Pipeline] sh 00:28:27.422 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:28:27.434 [Pipeline] sh 00:28:27.717 + cat autorun-spdk.conf 00:28:27.717 SPDK_RUN_FUNCTIONAL_TEST=1 00:28:27.717 SPDK_TEST_NVME=1 00:28:27.717 SPDK_TEST_FTL=1 00:28:27.717 SPDK_TEST_ISAL=1 00:28:27.717 SPDK_RUN_ASAN=1 00:28:27.717 SPDK_RUN_UBSAN=1 00:28:27.717 SPDK_TEST_XNVME=1 00:28:27.717 SPDK_TEST_NVME_FDP=1 00:28:27.717 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:28:27.724 RUN_NIGHTLY=0 00:28:27.726 [Pipeline] } 00:28:27.741 [Pipeline] // stage 00:28:27.758 [Pipeline] stage 00:28:27.759 [Pipeline] { (Run VM) 00:28:27.770 [Pipeline] sh 00:28:28.051 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:28:28.051 + echo 'Start stage prepare_nvme.sh' 00:28:28.051 Start stage prepare_nvme.sh 00:28:28.051 + [[ -n 3 ]] 00:28:28.051 + disk_prefix=ex3 00:28:28.051 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_3 ]] 00:28:28.051 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf ]] 00:28:28.051 + source /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf 00:28:28.051 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:28:28.051 ++ SPDK_TEST_NVME=1 00:28:28.051 ++ SPDK_TEST_FTL=1 00:28:28.051 ++ SPDK_TEST_ISAL=1 
00:28:28.051 ++ SPDK_RUN_ASAN=1 00:28:28.051 ++ SPDK_RUN_UBSAN=1 00:28:28.051 ++ SPDK_TEST_XNVME=1 00:28:28.051 ++ SPDK_TEST_NVME_FDP=1 00:28:28.051 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:28:28.051 ++ RUN_NIGHTLY=0 00:28:28.051 + cd /var/jenkins/workspace/nvme-vg-autotest_3 00:28:28.051 + nvme_files=() 00:28:28.051 + declare -A nvme_files 00:28:28.051 + backend_dir=/var/lib/libvirt/images/backends 00:28:28.051 + nvme_files['nvme.img']=5G 00:28:28.051 + nvme_files['nvme-cmb.img']=5G 00:28:28.051 + nvme_files['nvme-multi0.img']=4G 00:28:28.051 + nvme_files['nvme-multi1.img']=4G 00:28:28.051 + nvme_files['nvme-multi2.img']=4G 00:28:28.051 + nvme_files['nvme-openstack.img']=8G 00:28:28.051 + nvme_files['nvme-zns.img']=5G 00:28:28.051 + (( SPDK_TEST_NVME_PMR == 1 )) 00:28:28.051 + (( SPDK_TEST_FTL == 1 )) 00:28:28.051 + nvme_files["nvme-ftl.img"]=6G 00:28:28.051 + (( SPDK_TEST_NVME_FDP == 1 )) 00:28:28.051 + nvme_files["nvme-fdp.img"]=1G 00:28:28.051 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:28:28.051 + for nvme in "${!nvme_files[@]}" 00:28:28.051 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:28:28.051 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:28:28.051 + for nvme in "${!nvme_files[@]}" 00:28:28.051 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-ftl.img -s 6G 00:28:28.051 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:28:28.051 + for nvme in "${!nvme_files[@]}" 00:28:28.051 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:28:28.051 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:28:28.051 + for nvme in "${!nvme_files[@]}" 00:28:28.051 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:28:28.051 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:28:28.051 + for nvme in "${!nvme_files[@]}" 00:28:28.051 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:28:28.309 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:28:28.309 + for nvme in "${!nvme_files[@]}" 00:28:28.309 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:28:28.309 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:28:28.309 + for nvme in "${!nvme_files[@]}" 00:28:28.309 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:28:28.310 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:28:28.310 + for nvme in "${!nvme_files[@]}" 00:28:28.310 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-fdp.img -s 1G 00:28:28.569 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:28:28.569 + for nvme in "${!nvme_files[@]}" 00:28:28.569 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:28:28.569 
Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:28:28.569 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:28:28.569 + echo 'End stage prepare_nvme.sh' 00:28:28.569 End stage prepare_nvme.sh 00:28:28.582 [Pipeline] sh 00:28:28.865 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:28:28.865 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex3-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:28:28.865 00:28:28.865 DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant 00:28:28.865 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk 00:28:28.865 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_3 00:28:28.865 HELP=0 00:28:28.865 DRY_RUN=0 00:28:28.865 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,/var/lib/libvirt/images/backends/ex3-nvme-fdp.img, 00:28:28.865 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:28:28.865 NVME_AUTO_CREATE=0 00:28:28.865 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,, 00:28:28.865 NVME_CMB=,,,, 00:28:28.865 NVME_PMR=,,,, 00:28:28.865 NVME_ZNS=,,,, 00:28:28.865 NVME_MS=true,,,, 00:28:28.865 NVME_FDP=,,,on, 00:28:28.865 SPDK_VAGRANT_DISTRO=fedora39 00:28:28.865 SPDK_VAGRANT_VMCPU=10 00:28:28.865 SPDK_VAGRANT_VMRAM=12288 00:28:28.865 SPDK_VAGRANT_PROVIDER=libvirt 00:28:28.865 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:28:28.865 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:28:28.865 SPDK_OPENSTACK_NETWORK=0 00:28:28.865 VAGRANT_PACKAGE_BOX=0 00:28:28.865 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile 00:28:28.865 FORCE_DISTRO=true 00:28:28.865 VAGRANT_BOX_VERSION= 00:28:28.865 EXTRA_VAGRANTFILES= 00:28:28.865 NIC_MODEL=e1000 00:28:28.865 00:28:28.865 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt' 00:28:28.865 /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_3 00:28:31.471 Bringing machine 'default' up with 'libvirt' provider... 00:28:32.848 ==> default: Creating image (snapshot of base box volume). 00:28:33.108 ==> default: Creating domain with the following settings... 
00:28:33.108 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732641932_2bf69a2968e3e4516697 00:28:33.108 ==> default: -- Domain type: kvm 00:28:33.108 ==> default: -- Cpus: 10 00:28:33.108 ==> default: -- Feature: acpi 00:28:33.108 ==> default: -- Feature: apic 00:28:33.108 ==> default: -- Feature: pae 00:28:33.108 ==> default: -- Memory: 12288M 00:28:33.108 ==> default: -- Memory Backing: hugepages: 00:28:33.108 ==> default: -- Management MAC: 00:28:33.108 ==> default: -- Loader: 00:28:33.108 ==> default: -- Nvram: 00:28:33.108 ==> default: -- Base box: spdk/fedora39 00:28:33.108 ==> default: -- Storage pool: default 00:28:33.108 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732641932_2bf69a2968e3e4516697.img (20G) 00:28:33.108 ==> default: -- Volume Cache: default 00:28:33.108 ==> default: -- Kernel: 00:28:33.108 ==> default: -- Initrd: 00:28:33.108 ==> default: -- Graphics Type: vnc 00:28:33.108 ==> default: -- Graphics Port: -1 00:28:33.108 ==> default: -- Graphics IP: 127.0.0.1 00:28:33.108 ==> default: -- Graphics Password: Not defined 00:28:33.108 ==> default: -- Video Type: cirrus 00:28:33.108 ==> default: -- Video VRAM: 9216 00:28:33.108 ==> default: -- Sound Type: 00:28:33.108 ==> default: -- Keymap: en-us 00:28:33.108 ==> default: -- TPM Path: 00:28:33.108 ==> default: -- INPUT: type=mouse, bus=ps2 00:28:33.108 ==> default: -- Command line args: 00:28:33.108 ==> default: -> value=-device, 00:28:33.108 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:28:33.108 ==> default: -> value=-drive, 00:28:33.108 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:28:33.108 ==> default: -> value=-device, 00:28:33.108 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:28:33.108 ==> default: -> value=-device, 00:28:33.108 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:28:33.108 ==> default: -> value=-drive, 00:28:33.108 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-1-drive0, 00:28:33.108 ==> default: -> value=-device, 00:28:33.108 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:28:33.108 ==> default: -> value=-device, 00:28:33.108 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:28:33.108 ==> default: -> value=-drive, 00:28:33.108 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:28:33.108 ==> default: -> value=-device, 00:28:33.108 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:28:33.108 ==> default: -> value=-drive, 00:28:33.108 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:28:33.108 ==> default: -> value=-device, 00:28:33.108 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:28:33.108 ==> default: -> value=-drive, 00:28:33.108 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:28:33.108 ==> default: -> value=-device, 00:28:33.108 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:28:33.108 ==> default: -> value=-device, 00:28:33.108 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:28:33.108 ==> default: -> value=-device, 00:28:33.108 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:28:33.108 ==> default: -> value=-drive, 00:28:33.108 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:28:33.108 ==> default: -> value=-device, 00:28:33.108 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:28:33.367 ==> default: Creating shared folders metadata... 00:28:33.367 ==> default: Starting domain. 00:28:35.902 ==> default: Waiting for domain to get an IP address... 00:28:54.011 ==> default: Waiting for SSH to become available... 00:28:54.011 ==> default: Configuring and enabling network interfaces... 00:28:58.229 default: SSH address: 192.168.121.172:22 00:28:58.229 default: SSH username: vagrant 00:28:58.229 default: SSH auth method: private key 00:29:00.763 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk 00:29:08.883 ==> default: Mounting SSHFS shared folder... 00:29:11.416 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:29:11.416 ==> default: Checking Mount.. 00:29:12.809 ==> default: Folder Successfully Mounted! 00:29:12.809 ==> default: Running provisioner: file... 00:29:13.744 default: ~/.gitconfig => .gitconfig 00:29:14.311 00:29:14.311 SUCCESS! 00:29:14.311 00:29:14.311 cd to /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt and type "vagrant ssh" to use. 00:29:14.311 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:29:14.311 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt" to destroy all trace of vm. 00:29:14.311 00:29:14.320 [Pipeline] } 00:29:14.335 [Pipeline] // stage 00:29:14.345 [Pipeline] dir 00:29:14.346 Running in /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt 00:29:14.348 [Pipeline] { 00:29:14.360 [Pipeline] catchError 00:29:14.362 [Pipeline] { 00:29:14.376 [Pipeline] sh 00:29:14.658 + vagrant ssh-config --host vagrant 00:29:14.658 + sed -ne /^Host/,$p 00:29:14.658 + tee ssh_conf 00:29:17.980 Host vagrant 00:29:17.980 HostName 192.168.121.172 00:29:17.980 User vagrant 00:29:17.980 Port 22 00:29:17.980 UserKnownHostsFile /dev/null 00:29:17.980 StrictHostKeyChecking no 00:29:17.980 PasswordAuthentication no 00:29:17.980 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:29:17.980 IdentitiesOnly yes 00:29:17.980 LogLevel FATAL 00:29:17.980 ForwardAgent yes 00:29:17.980 ForwardX11 yes 00:29:17.980 00:29:17.993 [Pipeline] withEnv 00:29:17.996 [Pipeline] { 00:29:18.010 [Pipeline] sh 00:29:18.290 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:29:18.290 source /etc/os-release 00:29:18.290 [[ -e /image.version ]] && img=$(< /image.version) 00:29:18.290 # Minimal, systemd-like check. 
00:29:18.290 if [[ -e /.dockerenv ]]; then 00:29:18.290 # Clear garbage from the node's name: 00:29:18.290 # agt-er_autotest_547-896 -> autotest_547-896 00:29:18.290 # $HOSTNAME is the actual container id 00:29:18.290 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:29:18.290 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:29:18.290 # We can assume this is a mount from a host where container is running, 00:29:18.290 # so fetch its hostname to easily identify the target swarm worker. 00:29:18.290 container="$(< /etc/hostname) ($agent)" 00:29:18.290 else 00:29:18.290 # Fallback 00:29:18.290 container=$agent 00:29:18.290 fi 00:29:18.290 fi 00:29:18.290 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:29:18.290 00:29:18.560 [Pipeline] } 00:29:18.576 [Pipeline] // withEnv 00:29:18.585 [Pipeline] setCustomBuildProperty 00:29:18.600 [Pipeline] stage 00:29:18.603 [Pipeline] { (Tests) 00:29:18.620 [Pipeline] sh 00:29:18.900 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:29:19.173 [Pipeline] sh 00:29:19.475 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:29:19.749 [Pipeline] timeout 00:29:19.749 Timeout set to expire in 50 min 00:29:19.751 [Pipeline] { 00:29:19.767 [Pipeline] sh 00:29:20.048 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:29:20.615 HEAD is now at c86e5b182 bdev/malloc: Extract internal of verify_pi() for code reuse 00:29:20.627 [Pipeline] sh 00:29:20.907 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:29:21.208 [Pipeline] sh 00:29:21.493 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:29:21.769 [Pipeline] sh 00:29:22.050 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:29:22.308 ++ readlink -f spdk_repo 00:29:22.308 + DIR_ROOT=/home/vagrant/spdk_repo 00:29:22.308 + [[ -n /home/vagrant/spdk_repo ]] 00:29:22.308 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:29:22.308 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:29:22.308 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:29:22.308 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:29:22.308 + [[ -d /home/vagrant/spdk_repo/output ]] 00:29:22.308 + [[ nvme-vg-autotest == pkgdep-* ]] 00:29:22.308 + cd /home/vagrant/spdk_repo 00:29:22.308 + source /etc/os-release 00:29:22.308 ++ NAME='Fedora Linux' 00:29:22.308 ++ VERSION='39 (Cloud Edition)' 00:29:22.308 ++ ID=fedora 00:29:22.308 ++ VERSION_ID=39 00:29:22.308 ++ VERSION_CODENAME= 00:29:22.308 ++ PLATFORM_ID=platform:f39 00:29:22.308 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:29:22.308 ++ ANSI_COLOR='0;38;2;60;110;180' 00:29:22.308 ++ LOGO=fedora-logo-icon 00:29:22.308 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:29:22.308 ++ HOME_URL=https://fedoraproject.org/ 00:29:22.308 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:29:22.308 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:29:22.308 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:29:22.308 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:29:22.308 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:29:22.308 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:29:22.308 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:29:22.308 ++ SUPPORT_END=2024-11-12 00:29:22.308 ++ VARIANT='Cloud Edition' 00:29:22.308 ++ VARIANT_ID=cloud 00:29:22.308 + uname -a 00:29:22.308 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:29:22.308 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:29:22.922 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:23.182 Hugepages 00:29:23.182 node hugesize free / total 00:29:23.182 node0 1048576kB 0 / 0 00:29:23.182 node0 2048kB 0 / 0 00:29:23.182 00:29:23.182 Type BDF Vendor Device NUMA Driver Device Block devices 00:29:23.182 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:29:23.182 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:29:23.182 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:29:23.182 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:29:23.182 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:29:23.182 + rm -f /tmp/spdk-ld-path 00:29:23.182 + source autorun-spdk.conf 00:29:23.182 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:29:23.182 ++ SPDK_TEST_NVME=1 00:29:23.182 ++ SPDK_TEST_FTL=1 00:29:23.182 ++ SPDK_TEST_ISAL=1 00:29:23.182 ++ SPDK_RUN_ASAN=1 00:29:23.182 ++ SPDK_RUN_UBSAN=1 00:29:23.182 ++ SPDK_TEST_XNVME=1 00:29:23.182 ++ SPDK_TEST_NVME_FDP=1 00:29:23.182 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:29:23.182 ++ RUN_NIGHTLY=0 00:29:23.182 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:29:23.182 + [[ -n '' ]] 00:29:23.182 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:29:23.182 + for M in /var/spdk/build-*-manifest.txt 00:29:23.182 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:29:23.182 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:29:23.182 + for M in /var/spdk/build-*-manifest.txt 00:29:23.182 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:29:23.182 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:29:23.182 + for M in /var/spdk/build-*-manifest.txt 00:29:23.182 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:29:23.182 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:29:23.182 ++ uname 00:29:23.182 + [[ Linux == \L\i\n\u\x ]] 00:29:23.182 + sudo dmesg -T 00:29:23.442 + sudo dmesg --clear 00:29:23.442 + dmesg_pid=5249 00:29:23.442 
+ [[ Fedora Linux == FreeBSD ]] 00:29:23.442 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:29:23.442 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:29:23.442 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:29:23.442 + [[ -x /usr/src/fio-static/fio ]] 00:29:23.442 + sudo dmesg -Tw 00:29:23.442 + export FIO_BIN=/usr/src/fio-static/fio 00:29:23.442 + FIO_BIN=/usr/src/fio-static/fio 00:29:23.442 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:29:23.442 + [[ ! -v VFIO_QEMU_BIN ]] 00:29:23.442 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:29:23.442 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:29:23.442 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:29:23.442 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:29:23.442 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:29:23.442 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:29:23.442 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:29:23.442 17:26:24 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:29:23.442 17:26:24 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:29:23.442 17:26:24 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:29:23.442 17:26:24 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:29:23.442 17:26:24 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:29:23.442 17:26:24 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:29:23.442 17:26:24 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:29:23.442 17:26:24 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:29:23.442 17:26:24 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:29:23.442 17:26:24 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:29:23.442 17:26:24 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:29:23.442 17:26:24 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:29:23.442 17:26:24 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:29:23.442 17:26:24 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:29:23.442 17:26:24 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:29:23.442 17:26:24 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:23.442 17:26:24 -- scripts/common.sh@15 -- $ shopt -s extglob 00:29:23.442 17:26:24 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:23.442 17:26:24 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.442 17:26:24 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.442 17:26:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.442 17:26:24 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.442 17:26:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.442 17:26:24 -- paths/export.sh@5 -- $ export PATH 00:29:23.442 17:26:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.442 17:26:24 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:29:23.442 17:26:24 -- common/autobuild_common.sh@493 -- $ date +%s 00:29:23.442 17:26:24 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732641984.XXXXXX 00:29:23.442 17:26:24 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732641984.wceO7V 00:29:23.442 17:26:24 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:29:23.442 17:26:24 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:29:23.442 17:26:24 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:29:23.442 17:26:24 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:29:23.442 17:26:24 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:29:23.442 17:26:24 -- common/autobuild_common.sh@509 -- $ get_config_params 00:29:23.442 17:26:24 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:29:23.442 17:26:24 -- common/autotest_common.sh@10 -- $ set +x 00:29:23.442 17:26:24 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:29:23.442 17:26:24 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:29:23.442 17:26:24 -- pm/common@17 -- $ local monitor 00:29:23.442 17:26:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:23.442 17:26:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:23.442 17:26:24 -- pm/common@25 -- $ sleep 1 00:29:23.442 17:26:24 -- pm/common@21 -- $ date +%s 00:29:23.701 17:26:24 -- pm/common@21 -- $ date +%s 00:29:23.701 17:26:24 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732641984 00:29:23.701 17:26:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732641984 00:29:23.701 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732641984_collect-vmstat.pm.log 00:29:23.701 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732641984_collect-cpu-load.pm.log 00:29:24.639 17:26:25 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:29:24.639 17:26:25 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:29:24.639 17:26:25 -- spdk/autobuild.sh@12 -- $ umask 022 00:29:24.639 17:26:25 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:29:24.639 17:26:25 -- spdk/autobuild.sh@16 -- $ date -u 00:29:24.639 Tue Nov 26 05:26:25 PM UTC 2024 00:29:24.639 17:26:25 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:29:24.639 v25.01-pre-264-gc86e5b182 00:29:24.639 17:26:25 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:29:24.639 17:26:25 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:29:24.639 17:26:25 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:29:24.639 17:26:25 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:29:24.639 17:26:25 -- common/autotest_common.sh@10 -- $ set +x 00:29:24.639 ************************************ 00:29:24.639 START TEST asan 00:29:24.639 ************************************ 00:29:24.639 using asan 00:29:24.639 17:26:25 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:29:24.639 00:29:24.639 real 0m0.000s 00:29:24.639 user 0m0.000s 00:29:24.639 sys 0m0.000s 00:29:24.639 17:26:25 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:29:24.639 ************************************ 00:29:24.639 END TEST asan 00:29:24.639 ************************************ 00:29:24.639 17:26:25 asan -- common/autotest_common.sh@10 -- $ set +x 00:29:24.639 17:26:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:29:24.639 17:26:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:29:24.639 17:26:25 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:29:24.639 17:26:25 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:29:24.639 17:26:25 -- common/autotest_common.sh@10 -- $ set +x 00:29:24.639 ************************************ 00:29:24.639 START TEST ubsan 00:29:24.639 ************************************ 00:29:24.639 using ubsan 00:29:24.639 17:26:25 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:29:24.639 00:29:24.639 real 0m0.000s 00:29:24.639 user 0m0.000s 00:29:24.639 sys 0m0.000s 00:29:24.639 ************************************ 00:29:24.639 END TEST ubsan 00:29:24.639 ************************************ 00:29:24.639 17:26:25 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:29:24.639 17:26:25 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:29:24.639 17:26:25 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:29:24.639 17:26:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:29:24.639 17:26:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:29:24.639 17:26:25 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:29:24.639 17:26:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:29:24.639 17:26:25 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:29:24.639 17:26:25 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:29:24.639 17:26:25 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:29:24.639 17:26:25 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:29:24.898 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:29:24.898 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:29:25.467 Using 'verbs' RDMA provider 00:29:41.379 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:29:59.520 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:29:59.520 Creating mk/config.mk...done. 00:29:59.520 Creating mk/cc.flags.mk...done. 00:29:59.520 Type 'make' to build. 00:29:59.520 17:26:58 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:29:59.521 17:26:58 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:29:59.521 17:26:58 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:29:59.521 17:26:58 -- common/autotest_common.sh@10 -- $ set +x 00:29:59.521 ************************************ 00:29:59.521 START TEST make 00:29:59.521 ************************************ 00:29:59.521 17:26:58 make -- common/autotest_common.sh@1129 -- $ make -j10 00:29:59.521 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:29:59.521 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:29:59.521 meson setup builddir \ 00:29:59.521 -Dwith-libaio=enabled \ 00:29:59.521 -Dwith-liburing=enabled \ 00:29:59.521 -Dwith-libvfn=disabled \ 00:29:59.521 -Dwith-spdk=disabled \ 00:29:59.521 -Dexamples=false \ 00:29:59.521 -Dtests=false \ 00:29:59.521 -Dtools=false && \ 00:29:59.521 meson compile -C builddir && \ 00:29:59.521 cd -) 00:29:59.521 make[1]: Nothing to be done for 'all'. 
00:30:00.456 The Meson build system 00:30:00.456 Version: 1.5.0 00:30:00.456 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:30:00.456 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:30:00.456 Build type: native build 00:30:00.456 Project name: xnvme 00:30:00.456 Project version: 0.7.5 00:30:00.456 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:30:00.456 C linker for the host machine: cc ld.bfd 2.40-14 00:30:00.456 Host machine cpu family: x86_64 00:30:00.456 Host machine cpu: x86_64 00:30:00.456 Message: host_machine.system: linux 00:30:00.456 Compiler for C supports arguments -Wno-missing-braces: YES 00:30:00.456 Compiler for C supports arguments -Wno-cast-function-type: YES 00:30:00.456 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:30:00.456 Run-time dependency threads found: YES 00:30:00.456 Has header "setupapi.h" : NO 00:30:00.456 Has header "linux/blkzoned.h" : YES 00:30:00.456 Has header "linux/blkzoned.h" : YES (cached) 00:30:00.456 Has header "libaio.h" : YES 00:30:00.456 Library aio found: YES 00:30:00.456 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:30:00.456 Run-time dependency liburing found: YES 2.2 00:30:00.456 Dependency libvfn skipped: feature with-libvfn disabled 00:30:00.456 Found CMake: /usr/bin/cmake (3.27.7) 00:30:00.456 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:30:00.456 Subproject spdk : skipped: feature with-spdk disabled 00:30:00.456 Run-time dependency appleframeworks found: NO (tried framework) 00:30:00.456 Run-time dependency appleframeworks found: NO (tried framework) 00:30:00.456 Library rt found: YES 00:30:00.456 Checking for function "clock_gettime" with dependency -lrt: YES 00:30:00.456 Configuring xnvme_config.h using configuration 00:30:00.456 Configuring xnvme.spec using configuration 00:30:00.456 Run-time dependency bash-completion found: YES 2.11 00:30:00.456 Message: Bash-completions: /usr/share/bash-completion/completions 00:30:00.456 Program cp found: YES (/usr/bin/cp) 00:30:00.456 Build targets in project: 3 00:30:00.456 00:30:00.456 xnvme 0.7.5 00:30:00.456 00:30:00.456 Subprojects 00:30:00.456 spdk : NO Feature 'with-spdk' disabled 00:30:00.456 00:30:00.456 User defined options 00:30:00.456 examples : false 00:30:00.456 tests : false 00:30:00.456 tools : false 00:30:00.456 with-libaio : enabled 00:30:00.456 with-liburing: enabled 00:30:00.456 with-libvfn : disabled 00:30:00.456 with-spdk : disabled 00:30:00.456 00:30:00.456 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:30:00.715 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:30:00.715 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:30:00.974 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:30:00.974 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:30:00.974 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:30:00.974 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:30:00.974 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:30:00.974 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:30:00.974 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:30:00.974 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:30:00.974 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 
00:30:00.974 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:30:00.974 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:30:00.974 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:30:00.974 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:30:00.974 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:30:00.974 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:30:00.974 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:30:00.974 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:30:00.974 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:30:00.974 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:30:00.974 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:30:00.974 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:30:00.974 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:30:01.233 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:30:01.233 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:30:01.233 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:30:01.233 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:30:01.233 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:30:01.233 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:30:01.233 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:30:01.233 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:30:01.233 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:30:01.233 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:30:01.233 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:30:01.233 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:30:01.233 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:30:01.233 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:30:01.233 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:30:01.233 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:30:01.233 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:30:01.233 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:30:01.233 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:30:01.233 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:30:01.233 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:30:01.234 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:30:01.234 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:30:01.234 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:30:01.234 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:30:01.234 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:30:01.234 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 
00:30:01.234 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:30:01.234 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:30:01.234 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:30:01.493 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:30:01.493 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:30:01.493 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:30:01.493 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:30:01.493 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:30:01.493 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:30:01.493 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:30:01.493 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:30:01.493 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:30:01.493 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:30:01.493 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:30:01.493 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:30:01.493 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:30:01.493 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:30:01.493 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:30:01.751 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:30:01.751 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:30:01.751 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:30:01.751 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:30:01.751 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:30:02.010 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:30:02.010 [75/76] Linking static target lib/libxnvme.a 00:30:02.010 [76/76] Linking target lib/libxnvme.so.0.7.5 00:30:02.010 INFO: autodetecting backend as ninja 00:30:02.010 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:30:02.010 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:30:10.170 The Meson build system 00:30:10.170 Version: 1.5.0 00:30:10.170 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:30:10.170 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:30:10.170 Build type: native build 00:30:10.170 Program cat found: YES (/usr/bin/cat) 00:30:10.170 Project name: DPDK 00:30:10.170 Project version: 24.03.0 00:30:10.170 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:30:10.170 C linker for the host machine: cc ld.bfd 2.40-14 00:30:10.170 Host machine cpu family: x86_64 00:30:10.170 Host machine cpu: x86_64 00:30:10.170 Message: ## Building in Developer Mode ## 00:30:10.170 Program pkg-config found: YES (/usr/bin/pkg-config) 00:30:10.170 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:30:10.170 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:30:10.170 Program python3 found: YES (/usr/bin/python3) 00:30:10.170 Program cat found: YES (/usr/bin/cat) 00:30:10.170 Compiler for C supports arguments -march=native: YES 00:30:10.170 Checking for size of "void *" : 8 00:30:10.170 Checking for size of "void *" : 8 (cached) 00:30:10.170 Compiler for C supports 
link arguments -Wl,--undefined-version: YES 00:30:10.170 Library m found: YES 00:30:10.170 Library numa found: YES 00:30:10.170 Has header "numaif.h" : YES 00:30:10.170 Library fdt found: NO 00:30:10.170 Library execinfo found: NO 00:30:10.170 Has header "execinfo.h" : YES 00:30:10.170 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:30:10.170 Run-time dependency libarchive found: NO (tried pkgconfig) 00:30:10.170 Run-time dependency libbsd found: NO (tried pkgconfig) 00:30:10.170 Run-time dependency jansson found: NO (tried pkgconfig) 00:30:10.170 Run-time dependency openssl found: YES 3.1.1 00:30:10.170 Run-time dependency libpcap found: YES 1.10.4 00:30:10.170 Has header "pcap.h" with dependency libpcap: YES 00:30:10.170 Compiler for C supports arguments -Wcast-qual: YES 00:30:10.170 Compiler for C supports arguments -Wdeprecated: YES 00:30:10.170 Compiler for C supports arguments -Wformat: YES 00:30:10.170 Compiler for C supports arguments -Wformat-nonliteral: NO 00:30:10.170 Compiler for C supports arguments -Wformat-security: NO 00:30:10.170 Compiler for C supports arguments -Wmissing-declarations: YES 00:30:10.170 Compiler for C supports arguments -Wmissing-prototypes: YES 00:30:10.170 Compiler for C supports arguments -Wnested-externs: YES 00:30:10.170 Compiler for C supports arguments -Wold-style-definition: YES 00:30:10.170 Compiler for C supports arguments -Wpointer-arith: YES 00:30:10.170 Compiler for C supports arguments -Wsign-compare: YES 00:30:10.170 Compiler for C supports arguments -Wstrict-prototypes: YES 00:30:10.170 Compiler for C supports arguments -Wundef: YES 00:30:10.170 Compiler for C supports arguments -Wwrite-strings: YES 00:30:10.170 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:30:10.171 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:30:10.171 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:30:10.171 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:30:10.171 Program objdump found: YES (/usr/bin/objdump) 00:30:10.171 Compiler for C supports arguments -mavx512f: YES 00:30:10.171 Checking if "AVX512 checking" compiles: YES 00:30:10.171 Fetching value of define "__SSE4_2__" : 1 00:30:10.171 Fetching value of define "__AES__" : 1 00:30:10.171 Fetching value of define "__AVX__" : 1 00:30:10.171 Fetching value of define "__AVX2__" : 1 00:30:10.171 Fetching value of define "__AVX512BW__" : 1 00:30:10.171 Fetching value of define "__AVX512CD__" : 1 00:30:10.171 Fetching value of define "__AVX512DQ__" : 1 00:30:10.171 Fetching value of define "__AVX512F__" : 1 00:30:10.171 Fetching value of define "__AVX512VL__" : 1 00:30:10.171 Fetching value of define "__PCLMUL__" : 1 00:30:10.171 Fetching value of define "__RDRND__" : 1 00:30:10.171 Fetching value of define "__RDSEED__" : 1 00:30:10.171 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:30:10.171 Fetching value of define "__znver1__" : (undefined) 00:30:10.171 Fetching value of define "__znver2__" : (undefined) 00:30:10.171 Fetching value of define "__znver3__" : (undefined) 00:30:10.171 Fetching value of define "__znver4__" : (undefined) 00:30:10.171 Library asan found: YES 00:30:10.171 Compiler for C supports arguments -Wno-format-truncation: YES 00:30:10.171 Message: lib/log: Defining dependency "log" 00:30:10.171 Message: lib/kvargs: Defining dependency "kvargs" 00:30:10.171 Message: lib/telemetry: Defining dependency "telemetry" 00:30:10.171 Library rt found: YES 00:30:10.171 Checking for function "getentropy" : 
NO 00:30:10.171 Message: lib/eal: Defining dependency "eal" 00:30:10.171 Message: lib/ring: Defining dependency "ring" 00:30:10.171 Message: lib/rcu: Defining dependency "rcu" 00:30:10.171 Message: lib/mempool: Defining dependency "mempool" 00:30:10.171 Message: lib/mbuf: Defining dependency "mbuf" 00:30:10.171 Fetching value of define "__PCLMUL__" : 1 (cached) 00:30:10.171 Fetching value of define "__AVX512F__" : 1 (cached) 00:30:10.171 Fetching value of define "__AVX512BW__" : 1 (cached) 00:30:10.171 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:30:10.171 Fetching value of define "__AVX512VL__" : 1 (cached) 00:30:10.171 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:30:10.171 Compiler for C supports arguments -mpclmul: YES 00:30:10.171 Compiler for C supports arguments -maes: YES 00:30:10.171 Compiler for C supports arguments -mavx512f: YES (cached) 00:30:10.171 Compiler for C supports arguments -mavx512bw: YES 00:30:10.171 Compiler for C supports arguments -mavx512dq: YES 00:30:10.171 Compiler for C supports arguments -mavx512vl: YES 00:30:10.171 Compiler for C supports arguments -mvpclmulqdq: YES 00:30:10.171 Compiler for C supports arguments -mavx2: YES 00:30:10.171 Compiler for C supports arguments -mavx: YES 00:30:10.171 Message: lib/net: Defining dependency "net" 00:30:10.171 Message: lib/meter: Defining dependency "meter" 00:30:10.171 Message: lib/ethdev: Defining dependency "ethdev" 00:30:10.171 Message: lib/pci: Defining dependency "pci" 00:30:10.171 Message: lib/cmdline: Defining dependency "cmdline" 00:30:10.171 Message: lib/hash: Defining dependency "hash" 00:30:10.171 Message: lib/timer: Defining dependency "timer" 00:30:10.171 Message: lib/compressdev: Defining dependency "compressdev" 00:30:10.171 Message: lib/cryptodev: Defining dependency "cryptodev" 00:30:10.171 Message: lib/dmadev: Defining dependency "dmadev" 00:30:10.171 Compiler for C supports arguments -Wno-cast-qual: YES 00:30:10.171 Message: lib/power: Defining dependency "power" 00:30:10.171 Message: lib/reorder: Defining dependency "reorder" 00:30:10.171 Message: lib/security: Defining dependency "security" 00:30:10.171 Has header "linux/userfaultfd.h" : YES 00:30:10.171 Has header "linux/vduse.h" : YES 00:30:10.171 Message: lib/vhost: Defining dependency "vhost" 00:30:10.171 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:30:10.171 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:30:10.171 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:30:10.171 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:30:10.171 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:30:10.171 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:30:10.171 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:30:10.171 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:30:10.171 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:30:10.171 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:30:10.171 Program doxygen found: YES (/usr/local/bin/doxygen) 00:30:10.171 Configuring doxy-api-html.conf using configuration 00:30:10.171 Configuring doxy-api-man.conf using configuration 00:30:10.171 Program mandb found: YES (/usr/bin/mandb) 00:30:10.171 Program sphinx-build found: NO 00:30:10.171 Configuring rte_build_config.h using configuration 00:30:10.171 Message: 00:30:10.171 ================= 00:30:10.171 
Applications Enabled 00:30:10.171 ================= 00:30:10.171 00:30:10.171 apps: 00:30:10.171 00:30:10.171 00:30:10.171 Message: 00:30:10.171 ================= 00:30:10.171 Libraries Enabled 00:30:10.171 ================= 00:30:10.171 00:30:10.171 libs: 00:30:10.171 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:30:10.171 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:30:10.171 cryptodev, dmadev, power, reorder, security, vhost, 00:30:10.171 00:30:10.171 Message: 00:30:10.171 =============== 00:30:10.171 Drivers Enabled 00:30:10.171 =============== 00:30:10.171 00:30:10.171 common: 00:30:10.171 00:30:10.171 bus: 00:30:10.171 pci, vdev, 00:30:10.171 mempool: 00:30:10.171 ring, 00:30:10.171 dma: 00:30:10.171 00:30:10.171 net: 00:30:10.171 00:30:10.171 crypto: 00:30:10.171 00:30:10.171 compress: 00:30:10.171 00:30:10.171 vdpa: 00:30:10.171 00:30:10.171 00:30:10.171 Message: 00:30:10.171 ================= 00:30:10.171 Content Skipped 00:30:10.171 ================= 00:30:10.171 00:30:10.171 apps: 00:30:10.171 dumpcap: explicitly disabled via build config 00:30:10.171 graph: explicitly disabled via build config 00:30:10.171 pdump: explicitly disabled via build config 00:30:10.171 proc-info: explicitly disabled via build config 00:30:10.171 test-acl: explicitly disabled via build config 00:30:10.171 test-bbdev: explicitly disabled via build config 00:30:10.171 test-cmdline: explicitly disabled via build config 00:30:10.171 test-compress-perf: explicitly disabled via build config 00:30:10.171 test-crypto-perf: explicitly disabled via build config 00:30:10.171 test-dma-perf: explicitly disabled via build config 00:30:10.171 test-eventdev: explicitly disabled via build config 00:30:10.171 test-fib: explicitly disabled via build config 00:30:10.171 test-flow-perf: explicitly disabled via build config 00:30:10.171 test-gpudev: explicitly disabled via build config 00:30:10.171 test-mldev: explicitly disabled via build config 00:30:10.171 test-pipeline: explicitly disabled via build config 00:30:10.171 test-pmd: explicitly disabled via build config 00:30:10.171 test-regex: explicitly disabled via build config 00:30:10.171 test-sad: explicitly disabled via build config 00:30:10.171 test-security-perf: explicitly disabled via build config 00:30:10.171 00:30:10.171 libs: 00:30:10.171 argparse: explicitly disabled via build config 00:30:10.171 metrics: explicitly disabled via build config 00:30:10.171 acl: explicitly disabled via build config 00:30:10.171 bbdev: explicitly disabled via build config 00:30:10.171 bitratestats: explicitly disabled via build config 00:30:10.171 bpf: explicitly disabled via build config 00:30:10.171 cfgfile: explicitly disabled via build config 00:30:10.171 distributor: explicitly disabled via build config 00:30:10.171 efd: explicitly disabled via build config 00:30:10.171 eventdev: explicitly disabled via build config 00:30:10.171 dispatcher: explicitly disabled via build config 00:30:10.171 gpudev: explicitly disabled via build config 00:30:10.171 gro: explicitly disabled via build config 00:30:10.171 gso: explicitly disabled via build config 00:30:10.171 ip_frag: explicitly disabled via build config 00:30:10.171 jobstats: explicitly disabled via build config 00:30:10.171 latencystats: explicitly disabled via build config 00:30:10.171 lpm: explicitly disabled via build config 00:30:10.171 member: explicitly disabled via build config 00:30:10.171 pcapng: explicitly disabled via build config 00:30:10.171 rawdev: explicitly disabled via build config 
00:30:10.171 regexdev: explicitly disabled via build config 00:30:10.171 mldev: explicitly disabled via build config 00:30:10.171 rib: explicitly disabled via build config 00:30:10.171 sched: explicitly disabled via build config 00:30:10.171 stack: explicitly disabled via build config 00:30:10.171 ipsec: explicitly disabled via build config 00:30:10.171 pdcp: explicitly disabled via build config 00:30:10.171 fib: explicitly disabled via build config 00:30:10.171 port: explicitly disabled via build config 00:30:10.171 pdump: explicitly disabled via build config 00:30:10.171 table: explicitly disabled via build config 00:30:10.171 pipeline: explicitly disabled via build config 00:30:10.171 graph: explicitly disabled via build config 00:30:10.171 node: explicitly disabled via build config 00:30:10.171 00:30:10.171 drivers: 00:30:10.171 common/cpt: not in enabled drivers build config 00:30:10.171 common/dpaax: not in enabled drivers build config 00:30:10.171 common/iavf: not in enabled drivers build config 00:30:10.171 common/idpf: not in enabled drivers build config 00:30:10.171 common/ionic: not in enabled drivers build config 00:30:10.171 common/mvep: not in enabled drivers build config 00:30:10.171 common/octeontx: not in enabled drivers build config 00:30:10.171 bus/auxiliary: not in enabled drivers build config 00:30:10.171 bus/cdx: not in enabled drivers build config 00:30:10.171 bus/dpaa: not in enabled drivers build config 00:30:10.171 bus/fslmc: not in enabled drivers build config 00:30:10.171 bus/ifpga: not in enabled drivers build config 00:30:10.172 bus/platform: not in enabled drivers build config 00:30:10.172 bus/uacce: not in enabled drivers build config 00:30:10.172 bus/vmbus: not in enabled drivers build config 00:30:10.172 common/cnxk: not in enabled drivers build config 00:30:10.172 common/mlx5: not in enabled drivers build config 00:30:10.172 common/nfp: not in enabled drivers build config 00:30:10.172 common/nitrox: not in enabled drivers build config 00:30:10.172 common/qat: not in enabled drivers build config 00:30:10.172 common/sfc_efx: not in enabled drivers build config 00:30:10.172 mempool/bucket: not in enabled drivers build config 00:30:10.172 mempool/cnxk: not in enabled drivers build config 00:30:10.172 mempool/dpaa: not in enabled drivers build config 00:30:10.172 mempool/dpaa2: not in enabled drivers build config 00:30:10.172 mempool/octeontx: not in enabled drivers build config 00:30:10.172 mempool/stack: not in enabled drivers build config 00:30:10.172 dma/cnxk: not in enabled drivers build config 00:30:10.172 dma/dpaa: not in enabled drivers build config 00:30:10.172 dma/dpaa2: not in enabled drivers build config 00:30:10.172 dma/hisilicon: not in enabled drivers build config 00:30:10.172 dma/idxd: not in enabled drivers build config 00:30:10.172 dma/ioat: not in enabled drivers build config 00:30:10.172 dma/skeleton: not in enabled drivers build config 00:30:10.172 net/af_packet: not in enabled drivers build config 00:30:10.172 net/af_xdp: not in enabled drivers build config 00:30:10.172 net/ark: not in enabled drivers build config 00:30:10.172 net/atlantic: not in enabled drivers build config 00:30:10.172 net/avp: not in enabled drivers build config 00:30:10.172 net/axgbe: not in enabled drivers build config 00:30:10.172 net/bnx2x: not in enabled drivers build config 00:30:10.172 net/bnxt: not in enabled drivers build config 00:30:10.172 net/bonding: not in enabled drivers build config 00:30:10.172 net/cnxk: not in enabled drivers build config 
00:30:10.172 net/cpfl: not in enabled drivers build config 00:30:10.172 net/cxgbe: not in enabled drivers build config 00:30:10.172 net/dpaa: not in enabled drivers build config 00:30:10.172 net/dpaa2: not in enabled drivers build config 00:30:10.172 net/e1000: not in enabled drivers build config 00:30:10.172 net/ena: not in enabled drivers build config 00:30:10.172 net/enetc: not in enabled drivers build config 00:30:10.172 net/enetfec: not in enabled drivers build config 00:30:10.172 net/enic: not in enabled drivers build config 00:30:10.172 net/failsafe: not in enabled drivers build config 00:30:10.172 net/fm10k: not in enabled drivers build config 00:30:10.172 net/gve: not in enabled drivers build config 00:30:10.172 net/hinic: not in enabled drivers build config 00:30:10.172 net/hns3: not in enabled drivers build config 00:30:10.172 net/i40e: not in enabled drivers build config 00:30:10.172 net/iavf: not in enabled drivers build config 00:30:10.172 net/ice: not in enabled drivers build config 00:30:10.172 net/idpf: not in enabled drivers build config 00:30:10.172 net/igc: not in enabled drivers build config 00:30:10.172 net/ionic: not in enabled drivers build config 00:30:10.172 net/ipn3ke: not in enabled drivers build config 00:30:10.172 net/ixgbe: not in enabled drivers build config 00:30:10.172 net/mana: not in enabled drivers build config 00:30:10.172 net/memif: not in enabled drivers build config 00:30:10.172 net/mlx4: not in enabled drivers build config 00:30:10.172 net/mlx5: not in enabled drivers build config 00:30:10.172 net/mvneta: not in enabled drivers build config 00:30:10.172 net/mvpp2: not in enabled drivers build config 00:30:10.172 net/netvsc: not in enabled drivers build config 00:30:10.172 net/nfb: not in enabled drivers build config 00:30:10.172 net/nfp: not in enabled drivers build config 00:30:10.172 net/ngbe: not in enabled drivers build config 00:30:10.172 net/null: not in enabled drivers build config 00:30:10.172 net/octeontx: not in enabled drivers build config 00:30:10.172 net/octeon_ep: not in enabled drivers build config 00:30:10.172 net/pcap: not in enabled drivers build config 00:30:10.172 net/pfe: not in enabled drivers build config 00:30:10.172 net/qede: not in enabled drivers build config 00:30:10.172 net/ring: not in enabled drivers build config 00:30:10.172 net/sfc: not in enabled drivers build config 00:30:10.172 net/softnic: not in enabled drivers build config 00:30:10.172 net/tap: not in enabled drivers build config 00:30:10.172 net/thunderx: not in enabled drivers build config 00:30:10.172 net/txgbe: not in enabled drivers build config 00:30:10.172 net/vdev_netvsc: not in enabled drivers build config 00:30:10.172 net/vhost: not in enabled drivers build config 00:30:10.172 net/virtio: not in enabled drivers build config 00:30:10.172 net/vmxnet3: not in enabled drivers build config 00:30:10.172 raw/*: missing internal dependency, "rawdev" 00:30:10.172 crypto/armv8: not in enabled drivers build config 00:30:10.172 crypto/bcmfs: not in enabled drivers build config 00:30:10.172 crypto/caam_jr: not in enabled drivers build config 00:30:10.172 crypto/ccp: not in enabled drivers build config 00:30:10.172 crypto/cnxk: not in enabled drivers build config 00:30:10.172 crypto/dpaa_sec: not in enabled drivers build config 00:30:10.172 crypto/dpaa2_sec: not in enabled drivers build config 00:30:10.172 crypto/ipsec_mb: not in enabled drivers build config 00:30:10.172 crypto/mlx5: not in enabled drivers build config 00:30:10.172 crypto/mvsam: not in enabled 
drivers build config 00:30:10.172 crypto/nitrox: not in enabled drivers build config 00:30:10.172 crypto/null: not in enabled drivers build config 00:30:10.172 crypto/octeontx: not in enabled drivers build config 00:30:10.172 crypto/openssl: not in enabled drivers build config 00:30:10.172 crypto/scheduler: not in enabled drivers build config 00:30:10.172 crypto/uadk: not in enabled drivers build config 00:30:10.172 crypto/virtio: not in enabled drivers build config 00:30:10.172 compress/isal: not in enabled drivers build config 00:30:10.172 compress/mlx5: not in enabled drivers build config 00:30:10.172 compress/nitrox: not in enabled drivers build config 00:30:10.172 compress/octeontx: not in enabled drivers build config 00:30:10.172 compress/zlib: not in enabled drivers build config 00:30:10.172 regex/*: missing internal dependency, "regexdev" 00:30:10.172 ml/*: missing internal dependency, "mldev" 00:30:10.172 vdpa/ifc: not in enabled drivers build config 00:30:10.172 vdpa/mlx5: not in enabled drivers build config 00:30:10.172 vdpa/nfp: not in enabled drivers build config 00:30:10.172 vdpa/sfc: not in enabled drivers build config 00:30:10.172 event/*: missing internal dependency, "eventdev" 00:30:10.172 baseband/*: missing internal dependency, "bbdev" 00:30:10.172 gpu/*: missing internal dependency, "gpudev" 00:30:10.172 00:30:10.172 00:30:10.172 Build targets in project: 85 00:30:10.172 00:30:10.172 DPDK 24.03.0 00:30:10.172 00:30:10.172 User defined options 00:30:10.172 buildtype : debug 00:30:10.172 default_library : shared 00:30:10.172 libdir : lib 00:30:10.172 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:30:10.172 b_sanitize : address 00:30:10.172 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:30:10.172 c_link_args : 00:30:10.172 cpu_instruction_set: native 00:30:10.172 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:30:10.172 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:30:10.172 enable_docs : false 00:30:10.172 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:30:10.172 enable_kmods : false 00:30:10.172 max_lcores : 128 00:30:10.172 tests : false 00:30:10.172 00:30:10.172 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:30:10.172 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:30:10.172 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:30:10.172 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:30:10.172 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:30:10.172 [4/268] Linking static target lib/librte_kvargs.a 00:30:10.172 [5/268] Linking static target lib/librte_log.a 00:30:10.172 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:30:10.431 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:30:10.431 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:30:10.431 [9/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:30:10.431 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:30:10.431 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:30:10.431 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:30:10.431 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:30:10.431 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:30:10.431 [15/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:30:10.690 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:30:10.690 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:30:10.690 [18/268] Linking static target lib/librte_telemetry.a 00:30:10.949 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:30:10.949 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:30:10.949 [21/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:30:10.949 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:30:11.209 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:30:11.209 [24/268] Linking target lib/librte_log.so.24.1 00:30:11.209 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:30:11.209 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:30:11.209 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:30:11.209 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:30:11.209 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:30:11.209 [30/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:30:11.468 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:30:11.468 [32/268] Linking target lib/librte_kvargs.so.24.1 00:30:11.468 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:30:11.468 [34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:30:11.727 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:30:11.727 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:30:11.727 [37/268] Linking target lib/librte_telemetry.so.24.1 00:30:11.727 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:30:11.727 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:30:11.727 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:30:11.727 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:30:11.727 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:30:11.727 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:30:11.727 [44/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:30:11.987 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:30:11.987 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:30:11.987 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:30:11.987 
[48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:30:12.246 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:30:12.246 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:30:12.506 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:30:12.506 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:30:12.506 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:30:12.506 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:30:12.506 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:30:12.506 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:30:12.766 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:30:12.766 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:30:12.766 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:30:12.766 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:30:12.766 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:30:13.026 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:30:13.026 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:30:13.026 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:30:13.026 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:30:13.026 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:30:13.026 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:30:13.286 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:30:13.545 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:30:13.545 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:30:13.545 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:30:13.545 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:30:13.545 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:30:13.545 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:30:13.545 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:30:13.545 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:30:13.545 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:30:13.804 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:30:13.804 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:30:13.804 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:30:13.804 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:30:14.062 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:30:14.062 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:30:14.062 [84/268] Linking static target lib/librte_ring.a 00:30:14.062 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:30:14.063 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:30:14.063 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:30:14.063 [88/268] Linking static target lib/librte_eal.a 00:30:14.063 [89/268] Compiling C 
object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:30:14.322 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:30:14.322 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:30:14.322 [92/268] Linking static target lib/librte_mempool.a 00:30:14.322 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:30:14.581 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:30:14.581 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:30:14.581 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:30:14.581 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:30:14.581 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:30:14.581 [99/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:30:14.840 [100/268] Linking static target lib/librte_rcu.a 00:30:14.840 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:30:14.840 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:30:14.840 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:30:15.099 [104/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:30:15.099 [105/268] Linking static target lib/librte_meter.a 00:30:15.099 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:30:15.099 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:30:15.099 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:30:15.099 [109/268] Linking static target lib/librte_mbuf.a 00:30:15.099 [110/268] Linking static target lib/librte_net.a 00:30:15.358 [111/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:30:15.358 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:30:15.358 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:30:15.358 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:30:15.618 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:30:15.618 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:30:15.618 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:30:15.878 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:30:16.137 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:30:16.137 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:30:16.137 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:30:16.396 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:30:16.396 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:30:16.396 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:30:16.396 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:30:16.655 [126/268] Linking static target lib/librte_pci.a 00:30:16.655 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:30:16.655 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:30:16.655 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:30:16.914 [130/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:30:16.914 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:30:16.914 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:30:16.914 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:30:16.914 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:30:16.914 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:30:16.914 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:30:16.914 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:30:16.914 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:30:16.914 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:30:16.914 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:30:17.192 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:30:17.192 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:30:17.192 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:30:17.192 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:30:17.192 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:30:17.500 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:30:17.500 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:30:17.500 [148/268] Linking static target lib/librte_cmdline.a 00:30:17.500 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:30:17.500 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:30:17.759 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:30:17.759 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:30:17.759 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:30:17.759 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:30:17.759 [155/268] Linking static target lib/librte_timer.a 00:30:18.018 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:30:18.018 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:30:18.018 [158/268] Linking static target lib/librte_ethdev.a 00:30:18.018 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:30:18.276 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:30:18.276 [161/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:30:18.534 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:30:18.534 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:30:18.534 [164/268] Linking static target lib/librte_compressdev.a 00:30:18.534 [165/268] Linking static target lib/librte_dmadev.a 00:30:18.534 [166/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:30:18.534 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:30:18.793 [168/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:30:18.793 [169/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:30:18.793 [170/268] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:30:18.793 [171/268] Linking static target lib/librte_hash.a 00:30:18.793 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:30:19.051 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:30:19.051 [174/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:30:19.310 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:30:19.310 [176/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:30:19.310 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:30:19.310 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:30:19.310 [179/268] Linking static target lib/librte_cryptodev.a 00:30:19.310 [180/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:30:19.310 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:30:19.569 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:30:19.569 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:30:19.827 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:30:19.827 [185/268] Linking static target lib/librte_power.a 00:30:19.827 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:30:20.086 [187/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:30:20.086 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:30:20.086 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:30:20.086 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:30:20.086 [191/268] Linking static target lib/librte_reorder.a 00:30:20.086 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:30:20.086 [193/268] Linking static target lib/librte_security.a 00:30:20.656 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:30:20.967 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:30:20.967 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:30:20.967 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:30:20.967 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:30:20.967 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:30:21.226 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:30:21.226 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:30:21.483 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:30:21.483 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:30:21.483 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:30:21.483 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:30:21.740 [206/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:30:21.740 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:30:21.740 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:30:21.740 [209/268] Compiling C 
object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:30:21.740 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:30:21.740 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:30:21.998 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:30:21.998 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:30:21.998 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:30:22.257 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:30:22.257 [216/268] Linking static target drivers/librte_bus_vdev.a 00:30:22.257 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:30:22.257 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:30:22.257 [219/268] Linking static target drivers/librte_bus_pci.a 00:30:22.257 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:30:22.257 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:30:22.515 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:30:22.515 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:30:22.515 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:30:22.515 [225/268] Linking static target drivers/librte_mempool_ring.a 00:30:22.515 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:30:22.773 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:30:23.339 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:30:26.625 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:30:26.625 [230/268] Linking target lib/librte_eal.so.24.1 00:30:26.884 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:30:26.884 [232/268] Linking target lib/librte_ring.so.24.1 00:30:26.884 [233/268] Linking target lib/librte_meter.so.24.1 00:30:26.884 [234/268] Linking target lib/librte_pci.so.24.1 00:30:26.884 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:30:26.884 [236/268] Linking target lib/librte_dmadev.so.24.1 00:30:26.884 [237/268] Linking target lib/librte_timer.so.24.1 00:30:26.884 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:30:27.142 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:30:27.142 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:30:27.142 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:30:27.142 [242/268] Linking target lib/librte_mempool.so.24.1 00:30:27.142 [243/268] Linking target lib/librte_rcu.so.24.1 00:30:27.142 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:30:27.142 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:30:27.142 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:30:27.142 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:30:27.142 [248/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:30:27.142 [249/268] Linking target lib/librte_mbuf.so.24.1 00:30:27.142 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:30:27.399 [251/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:30:27.399 [252/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:30:27.399 [253/268] Linking static target lib/librte_vhost.a 00:30:27.399 [254/268] Linking target lib/librte_reorder.so.24.1 00:30:27.399 [255/268] Linking target lib/librte_compressdev.so.24.1 00:30:27.399 [256/268] Linking target lib/librte_net.so.24.1 00:30:27.399 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:30:27.658 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:30:27.658 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:30:27.658 [260/268] Linking target lib/librte_cmdline.so.24.1 00:30:27.658 [261/268] Linking target lib/librte_hash.so.24.1 00:30:27.658 [262/268] Linking target lib/librte_security.so.24.1 00:30:27.658 [263/268] Linking target lib/librte_ethdev.so.24.1 00:30:27.658 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:30:27.658 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:30:27.916 [266/268] Linking target lib/librte_power.so.24.1 00:30:29.821 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:30:29.821 [268/268] Linking target lib/librte_vhost.so.24.1 00:30:29.821 INFO: autodetecting backend as ninja 00:30:29.821 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:30:47.909 CC lib/ut_mock/mock.o 00:30:47.909 CC lib/ut/ut.o 00:30:47.909 CC lib/log/log.o 00:30:47.909 CC lib/log/log_deprecated.o 00:30:47.909 CC lib/log/log_flags.o 00:30:47.909 LIB libspdk_ut_mock.a 00:30:47.909 LIB libspdk_log.a 00:30:47.909 SO libspdk_ut_mock.so.6.0 00:30:47.909 LIB libspdk_ut.a 00:30:47.909 SO libspdk_log.so.7.1 00:30:47.909 SYMLINK libspdk_ut_mock.so 00:30:47.909 SO libspdk_ut.so.2.0 00:30:47.909 SYMLINK libspdk_log.so 00:30:47.909 SYMLINK libspdk_ut.so 00:30:47.909 CC lib/util/bit_array.o 00:30:47.909 CC lib/util/cpuset.o 00:30:47.909 CC lib/util/crc16.o 00:30:47.909 CC lib/util/base64.o 00:30:47.909 CC lib/util/crc32c.o 00:30:47.909 CC lib/util/crc32.o 00:30:47.909 CC lib/dma/dma.o 00:30:47.909 CC lib/ioat/ioat.o 00:30:47.909 CXX lib/trace_parser/trace.o 00:30:47.909 CC lib/vfio_user/host/vfio_user_pci.o 00:30:47.909 CC lib/util/crc32_ieee.o 00:30:47.909 CC lib/util/crc64.o 00:30:47.909 CC lib/util/dif.o 00:30:47.909 CC lib/util/fd.o 00:30:47.909 CC lib/util/fd_group.o 00:30:47.909 LIB libspdk_dma.a 00:30:47.909 CC lib/util/file.o 00:30:47.909 CC lib/vfio_user/host/vfio_user.o 00:30:47.909 SO libspdk_dma.so.5.0 00:30:47.909 CC lib/util/hexlify.o 00:30:47.909 SYMLINK libspdk_dma.so 00:30:47.909 CC lib/util/iov.o 00:30:47.909 CC lib/util/math.o 00:30:47.909 LIB libspdk_ioat.a 00:30:47.909 CC lib/util/net.o 00:30:47.909 SO libspdk_ioat.so.7.0 00:30:47.909 CC lib/util/pipe.o 00:30:47.909 SYMLINK libspdk_ioat.so 00:30:47.909 CC lib/util/strerror_tls.o 00:30:47.909 CC lib/util/string.o 00:30:47.909 LIB libspdk_vfio_user.a 00:30:47.909 CC lib/util/uuid.o 00:30:47.909 SO libspdk_vfio_user.so.5.0 00:30:47.909 CC lib/util/xor.o 00:30:47.909 CC lib/util/zipf.o 00:30:47.909 SYMLINK libspdk_vfio_user.so 00:30:47.909 CC lib/util/md5.o 00:30:47.909 LIB libspdk_util.a 
00:30:47.909 SO libspdk_util.so.10.1 00:30:47.909 LIB libspdk_trace_parser.a 00:30:47.909 SO libspdk_trace_parser.so.6.0 00:30:47.909 SYMLINK libspdk_util.so 00:30:47.909 SYMLINK libspdk_trace_parser.so 00:30:47.909 CC lib/conf/conf.o 00:30:47.909 CC lib/json/json_util.o 00:30:47.909 CC lib/json/json_parse.o 00:30:47.909 CC lib/json/json_write.o 00:30:47.909 CC lib/rdma_utils/rdma_utils.o 00:30:47.909 CC lib/vmd/vmd.o 00:30:47.909 CC lib/idxd/idxd_user.o 00:30:47.909 CC lib/idxd/idxd.o 00:30:47.909 CC lib/vmd/led.o 00:30:47.909 CC lib/env_dpdk/env.o 00:30:47.909 CC lib/idxd/idxd_kernel.o 00:30:47.909 LIB libspdk_conf.a 00:30:47.909 SO libspdk_conf.so.6.0 00:30:47.909 CC lib/env_dpdk/memory.o 00:30:47.909 CC lib/env_dpdk/pci.o 00:30:47.909 SYMLINK libspdk_conf.so 00:30:47.909 CC lib/env_dpdk/init.o 00:30:47.909 CC lib/env_dpdk/threads.o 00:30:47.909 CC lib/env_dpdk/pci_ioat.o 00:30:47.909 LIB libspdk_rdma_utils.a 00:30:47.909 LIB libspdk_json.a 00:30:47.909 SO libspdk_rdma_utils.so.1.0 00:30:47.909 SO libspdk_json.so.6.0 00:30:47.909 CC lib/env_dpdk/pci_virtio.o 00:30:47.909 SYMLINK libspdk_rdma_utils.so 00:30:47.909 CC lib/env_dpdk/pci_vmd.o 00:30:47.909 CC lib/env_dpdk/pci_idxd.o 00:30:47.909 SYMLINK libspdk_json.so 00:30:47.909 CC lib/env_dpdk/pci_event.o 00:30:47.909 CC lib/env_dpdk/sigbus_handler.o 00:30:47.909 CC lib/env_dpdk/pci_dpdk.o 00:30:47.909 CC lib/rdma_provider/common.o 00:30:47.909 CC lib/env_dpdk/pci_dpdk_2207.o 00:30:47.909 LIB libspdk_idxd.a 00:30:48.168 SO libspdk_idxd.so.12.1 00:30:48.168 CC lib/rdma_provider/rdma_provider_verbs.o 00:30:48.168 CC lib/env_dpdk/pci_dpdk_2211.o 00:30:48.168 CC lib/jsonrpc/jsonrpc_server.o 00:30:48.168 SYMLINK libspdk_idxd.so 00:30:48.168 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:30:48.168 CC lib/jsonrpc/jsonrpc_client.o 00:30:48.168 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:30:48.168 LIB libspdk_vmd.a 00:30:48.168 SO libspdk_vmd.so.6.0 00:30:48.168 SYMLINK libspdk_vmd.so 00:30:48.168 LIB libspdk_rdma_provider.a 00:30:48.427 SO libspdk_rdma_provider.so.7.0 00:30:48.427 SYMLINK libspdk_rdma_provider.so 00:30:48.427 LIB libspdk_jsonrpc.a 00:30:48.427 SO libspdk_jsonrpc.so.6.0 00:30:48.427 SYMLINK libspdk_jsonrpc.so 00:30:49.001 CC lib/rpc/rpc.o 00:30:49.001 LIB libspdk_env_dpdk.a 00:30:49.260 LIB libspdk_rpc.a 00:30:49.260 SO libspdk_env_dpdk.so.15.1 00:30:49.260 SO libspdk_rpc.so.6.0 00:30:49.260 SYMLINK libspdk_rpc.so 00:30:49.260 SYMLINK libspdk_env_dpdk.so 00:30:49.518 CC lib/trace/trace_flags.o 00:30:49.518 CC lib/trace/trace.o 00:30:49.518 CC lib/trace/trace_rpc.o 00:30:49.518 CC lib/keyring/keyring_rpc.o 00:30:49.518 CC lib/keyring/keyring.o 00:30:49.518 CC lib/notify/notify.o 00:30:49.518 CC lib/notify/notify_rpc.o 00:30:49.778 LIB libspdk_notify.a 00:30:49.778 SO libspdk_notify.so.6.0 00:30:49.778 LIB libspdk_keyring.a 00:30:49.778 LIB libspdk_trace.a 00:30:50.036 SO libspdk_keyring.so.2.0 00:30:50.036 SYMLINK libspdk_notify.so 00:30:50.036 SO libspdk_trace.so.11.0 00:30:50.036 SYMLINK libspdk_keyring.so 00:30:50.036 SYMLINK libspdk_trace.so 00:30:50.295 CC lib/thread/iobuf.o 00:30:50.295 CC lib/thread/thread.o 00:30:50.295 CC lib/sock/sock.o 00:30:50.295 CC lib/sock/sock_rpc.o 00:30:50.862 LIB libspdk_sock.a 00:30:50.862 SO libspdk_sock.so.10.0 00:30:50.862 SYMLINK libspdk_sock.so 00:30:51.441 CC lib/nvme/nvme_ctrlr.o 00:30:51.441 CC lib/nvme/nvme_ctrlr_cmd.o 00:30:51.441 CC lib/nvme/nvme_fabric.o 00:30:51.441 CC lib/nvme/nvme_ns_cmd.o 00:30:51.441 CC lib/nvme/nvme_ns.o 00:30:51.441 CC lib/nvme/nvme_pcie_common.o 00:30:51.441 CC 
lib/nvme/nvme_pcie.o 00:30:51.441 CC lib/nvme/nvme_qpair.o 00:30:51.441 CC lib/nvme/nvme.o 00:30:52.009 CC lib/nvme/nvme_quirks.o 00:30:52.009 LIB libspdk_thread.a 00:30:52.009 SO libspdk_thread.so.11.0 00:30:52.009 CC lib/nvme/nvme_transport.o 00:30:52.268 CC lib/nvme/nvme_discovery.o 00:30:52.268 SYMLINK libspdk_thread.so 00:30:52.268 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:30:52.268 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:30:52.268 CC lib/nvme/nvme_tcp.o 00:30:52.268 CC lib/nvme/nvme_opal.o 00:30:52.268 CC lib/nvme/nvme_io_msg.o 00:30:52.527 CC lib/nvme/nvme_poll_group.o 00:30:52.785 CC lib/accel/accel.o 00:30:52.785 CC lib/accel/accel_rpc.o 00:30:52.785 CC lib/accel/accel_sw.o 00:30:52.786 CC lib/nvme/nvme_zns.o 00:30:53.044 CC lib/nvme/nvme_stubs.o 00:30:53.044 CC lib/nvme/nvme_auth.o 00:30:53.044 CC lib/blob/blobstore.o 00:30:53.044 CC lib/init/json_config.o 00:30:53.044 CC lib/blob/request.o 00:30:53.303 CC lib/blob/zeroes.o 00:30:53.303 CC lib/init/subsystem.o 00:30:53.303 CC lib/nvme/nvme_cuse.o 00:30:53.561 CC lib/nvme/nvme_rdma.o 00:30:53.561 CC lib/blob/blob_bs_dev.o 00:30:53.561 CC lib/init/subsystem_rpc.o 00:30:53.561 CC lib/virtio/virtio.o 00:30:53.821 CC lib/init/rpc.o 00:30:53.821 CC lib/virtio/virtio_vhost_user.o 00:30:53.821 CC lib/virtio/virtio_vfio_user.o 00:30:53.821 LIB libspdk_init.a 00:30:53.821 CC lib/virtio/virtio_pci.o 00:30:53.821 SO libspdk_init.so.6.0 00:30:54.080 LIB libspdk_accel.a 00:30:54.080 SYMLINK libspdk_init.so 00:30:54.080 SO libspdk_accel.so.16.0 00:30:54.080 SYMLINK libspdk_accel.so 00:30:54.080 CC lib/fsdev/fsdev.o 00:30:54.080 CC lib/fsdev/fsdev_io.o 00:30:54.080 CC lib/fsdev/fsdev_rpc.o 00:30:54.080 CC lib/event/app.o 00:30:54.080 CC lib/event/reactor.o 00:30:54.339 CC lib/bdev/bdev.o 00:30:54.339 LIB libspdk_virtio.a 00:30:54.339 CC lib/bdev/bdev_rpc.o 00:30:54.339 SO libspdk_virtio.so.7.0 00:30:54.339 CC lib/event/log_rpc.o 00:30:54.339 SYMLINK libspdk_virtio.so 00:30:54.339 CC lib/event/app_rpc.o 00:30:54.598 CC lib/event/scheduler_static.o 00:30:54.598 CC lib/bdev/bdev_zone.o 00:30:54.598 CC lib/bdev/part.o 00:30:54.598 CC lib/bdev/scsi_nvme.o 00:30:54.598 LIB libspdk_event.a 00:30:54.856 SO libspdk_event.so.14.0 00:30:54.856 SYMLINK libspdk_event.so 00:30:54.856 LIB libspdk_fsdev.a 00:30:54.856 LIB libspdk_nvme.a 00:30:54.856 SO libspdk_fsdev.so.2.0 00:30:54.856 SYMLINK libspdk_fsdev.so 00:30:55.115 SO libspdk_nvme.so.15.0 00:30:55.374 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:30:55.374 SYMLINK libspdk_nvme.so 00:30:56.310 LIB libspdk_fuse_dispatcher.a 00:30:56.310 SO libspdk_fuse_dispatcher.so.1.0 00:30:56.310 SYMLINK libspdk_fuse_dispatcher.so 00:30:56.878 LIB libspdk_blob.a 00:30:56.878 SO libspdk_blob.so.12.0 00:30:57.137 SYMLINK libspdk_blob.so 00:30:57.396 LIB libspdk_bdev.a 00:30:57.396 CC lib/lvol/lvol.o 00:30:57.396 CC lib/blobfs/blobfs.o 00:30:57.396 CC lib/blobfs/tree.o 00:30:57.396 SO libspdk_bdev.so.17.0 00:30:57.655 SYMLINK libspdk_bdev.so 00:30:57.915 CC lib/scsi/dev.o 00:30:57.915 CC lib/scsi/lun.o 00:30:57.915 CC lib/scsi/scsi.o 00:30:57.915 CC lib/scsi/port.o 00:30:57.915 CC lib/nbd/nbd.o 00:30:57.915 CC lib/ftl/ftl_core.o 00:30:57.915 CC lib/ublk/ublk.o 00:30:57.915 CC lib/nvmf/ctrlr.o 00:30:57.915 CC lib/nvmf/ctrlr_discovery.o 00:30:57.915 CC lib/ftl/ftl_init.o 00:30:58.174 CC lib/ftl/ftl_layout.o 00:30:58.174 CC lib/scsi/scsi_bdev.o 00:30:58.174 CC lib/nbd/nbd_rpc.o 00:30:58.441 CC lib/ftl/ftl_debug.o 00:30:58.441 CC lib/ftl/ftl_io.o 00:30:58.441 LIB libspdk_nbd.a 00:30:58.441 SO libspdk_nbd.so.7.0 00:30:58.441 CC 
lib/ftl/ftl_sb.o 00:30:58.441 LIB libspdk_blobfs.a 00:30:58.441 SYMLINK libspdk_nbd.so 00:30:58.441 CC lib/ftl/ftl_l2p.o 00:30:58.441 SO libspdk_blobfs.so.11.0 00:30:58.441 LIB libspdk_lvol.a 00:30:58.739 CC lib/ublk/ublk_rpc.o 00:30:58.739 SO libspdk_lvol.so.11.0 00:30:58.739 CC lib/scsi/scsi_pr.o 00:30:58.739 CC lib/nvmf/ctrlr_bdev.o 00:30:58.739 SYMLINK libspdk_blobfs.so 00:30:58.739 CC lib/nvmf/subsystem.o 00:30:58.739 CC lib/nvmf/nvmf.o 00:30:58.739 SYMLINK libspdk_lvol.so 00:30:58.739 CC lib/scsi/scsi_rpc.o 00:30:58.739 CC lib/ftl/ftl_l2p_flat.o 00:30:58.739 CC lib/ftl/ftl_nv_cache.o 00:30:58.739 CC lib/nvmf/nvmf_rpc.o 00:30:58.739 LIB libspdk_ublk.a 00:30:58.739 SO libspdk_ublk.so.3.0 00:30:58.739 CC lib/nvmf/transport.o 00:30:58.998 SYMLINK libspdk_ublk.so 00:30:58.998 CC lib/nvmf/tcp.o 00:30:58.998 CC lib/nvmf/stubs.o 00:30:58.998 CC lib/scsi/task.o 00:30:59.257 LIB libspdk_scsi.a 00:30:59.257 SO libspdk_scsi.so.9.0 00:30:59.257 SYMLINK libspdk_scsi.so 00:30:59.257 CC lib/nvmf/mdns_server.o 00:30:59.257 CC lib/nvmf/rdma.o 00:30:59.515 CC lib/nvmf/auth.o 00:30:59.515 CC lib/ftl/ftl_band.o 00:30:59.773 CC lib/ftl/ftl_band_ops.o 00:30:59.773 CC lib/ftl/ftl_writer.o 00:30:59.773 CC lib/iscsi/conn.o 00:31:00.031 CC lib/vhost/vhost.o 00:31:00.032 CC lib/ftl/ftl_rq.o 00:31:00.032 CC lib/vhost/vhost_rpc.o 00:31:00.032 CC lib/ftl/ftl_reloc.o 00:31:00.032 CC lib/iscsi/init_grp.o 00:31:00.032 CC lib/ftl/ftl_l2p_cache.o 00:31:00.291 CC lib/ftl/ftl_p2l.o 00:31:00.291 CC lib/ftl/ftl_p2l_log.o 00:31:00.550 CC lib/vhost/vhost_scsi.o 00:31:00.550 CC lib/ftl/mngt/ftl_mngt.o 00:31:00.550 CC lib/iscsi/iscsi.o 00:31:00.550 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:31:00.550 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:31:00.809 CC lib/vhost/vhost_blk.o 00:31:00.809 CC lib/ftl/mngt/ftl_mngt_startup.o 00:31:00.809 CC lib/iscsi/param.o 00:31:00.809 CC lib/vhost/rte_vhost_user.o 00:31:00.809 CC lib/ftl/mngt/ftl_mngt_md.o 00:31:00.809 CC lib/ftl/mngt/ftl_mngt_misc.o 00:31:00.809 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:31:01.067 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:31:01.067 CC lib/ftl/mngt/ftl_mngt_band.o 00:31:01.067 CC lib/iscsi/portal_grp.o 00:31:01.067 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:31:01.067 CC lib/iscsi/tgt_node.o 00:31:01.326 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:31:01.326 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:31:01.326 CC lib/iscsi/iscsi_subsystem.o 00:31:01.326 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:31:01.326 CC lib/ftl/utils/ftl_conf.o 00:31:01.585 CC lib/ftl/utils/ftl_md.o 00:31:01.585 CC lib/iscsi/iscsi_rpc.o 00:31:01.585 CC lib/iscsi/task.o 00:31:01.843 CC lib/ftl/utils/ftl_mempool.o 00:31:01.843 CC lib/ftl/utils/ftl_bitmap.o 00:31:01.843 CC lib/ftl/utils/ftl_property.o 00:31:01.843 LIB libspdk_vhost.a 00:31:01.843 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:31:01.843 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:31:01.843 LIB libspdk_nvmf.a 00:31:01.843 SO libspdk_vhost.so.8.0 00:31:01.843 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:31:01.843 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:31:02.101 SYMLINK libspdk_vhost.so 00:31:02.101 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:31:02.102 SO libspdk_nvmf.so.20.0 00:31:02.102 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:31:02.102 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:31:02.102 CC lib/ftl/upgrade/ftl_sb_v3.o 00:31:02.102 CC lib/ftl/upgrade/ftl_sb_v5.o 00:31:02.102 LIB libspdk_iscsi.a 00:31:02.102 CC lib/ftl/nvc/ftl_nvc_dev.o 00:31:02.102 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:31:02.102 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:31:02.360 SO libspdk_iscsi.so.8.0 00:31:02.360 CC 
lib/ftl/nvc/ftl_nvc_bdev_common.o 00:31:02.360 CC lib/ftl/base/ftl_base_dev.o 00:31:02.360 SYMLINK libspdk_nvmf.so 00:31:02.360 CC lib/ftl/base/ftl_base_bdev.o 00:31:02.360 CC lib/ftl/ftl_trace.o 00:31:02.360 SYMLINK libspdk_iscsi.so 00:31:02.671 LIB libspdk_ftl.a 00:31:02.944 SO libspdk_ftl.so.9.0 00:31:03.202 SYMLINK libspdk_ftl.so 00:31:03.769 CC module/env_dpdk/env_dpdk_rpc.o 00:31:03.769 CC module/accel/ioat/accel_ioat.o 00:31:03.769 CC module/keyring/linux/keyring.o 00:31:03.769 CC module/accel/dsa/accel_dsa.o 00:31:03.769 CC module/blob/bdev/blob_bdev.o 00:31:03.769 CC module/fsdev/aio/fsdev_aio.o 00:31:03.769 CC module/sock/posix/posix.o 00:31:03.769 CC module/keyring/file/keyring.o 00:31:03.769 CC module/scheduler/dynamic/scheduler_dynamic.o 00:31:03.769 CC module/accel/error/accel_error.o 00:31:03.769 LIB libspdk_env_dpdk_rpc.a 00:31:03.769 SO libspdk_env_dpdk_rpc.so.6.0 00:31:03.769 CC module/keyring/linux/keyring_rpc.o 00:31:03.769 SYMLINK libspdk_env_dpdk_rpc.so 00:31:03.769 CC module/fsdev/aio/fsdev_aio_rpc.o 00:31:03.769 CC module/keyring/file/keyring_rpc.o 00:31:04.029 LIB libspdk_scheduler_dynamic.a 00:31:04.029 CC module/accel/ioat/accel_ioat_rpc.o 00:31:04.029 CC module/accel/error/accel_error_rpc.o 00:31:04.029 SO libspdk_scheduler_dynamic.so.4.0 00:31:04.029 LIB libspdk_keyring_linux.a 00:31:04.029 LIB libspdk_keyring_file.a 00:31:04.029 SO libspdk_keyring_linux.so.1.0 00:31:04.029 LIB libspdk_blob_bdev.a 00:31:04.029 CC module/accel/dsa/accel_dsa_rpc.o 00:31:04.029 SYMLINK libspdk_scheduler_dynamic.so 00:31:04.029 SO libspdk_keyring_file.so.2.0 00:31:04.029 SO libspdk_blob_bdev.so.12.0 00:31:04.029 SYMLINK libspdk_keyring_linux.so 00:31:04.029 LIB libspdk_accel_ioat.a 00:31:04.029 CC module/fsdev/aio/linux_aio_mgr.o 00:31:04.029 SYMLINK libspdk_keyring_file.so 00:31:04.029 SYMLINK libspdk_blob_bdev.so 00:31:04.029 LIB libspdk_accel_error.a 00:31:04.288 SO libspdk_accel_ioat.so.6.0 00:31:04.288 SO libspdk_accel_error.so.2.0 00:31:04.288 LIB libspdk_accel_dsa.a 00:31:04.288 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:31:04.288 SO libspdk_accel_dsa.so.5.0 00:31:04.288 SYMLINK libspdk_accel_ioat.so 00:31:04.288 SYMLINK libspdk_accel_error.so 00:31:04.288 CC module/scheduler/gscheduler/gscheduler.o 00:31:04.288 CC module/accel/iaa/accel_iaa.o 00:31:04.288 SYMLINK libspdk_accel_dsa.so 00:31:04.547 LIB libspdk_scheduler_dpdk_governor.a 00:31:04.547 CC module/bdev/delay/vbdev_delay.o 00:31:04.547 CC module/bdev/error/vbdev_error.o 00:31:04.547 LIB libspdk_scheduler_gscheduler.a 00:31:04.547 SO libspdk_scheduler_dpdk_governor.so.4.0 00:31:04.547 SO libspdk_scheduler_gscheduler.so.4.0 00:31:04.547 CC module/bdev/gpt/gpt.o 00:31:04.547 SYMLINK libspdk_scheduler_dpdk_governor.so 00:31:04.547 CC module/bdev/lvol/vbdev_lvol.o 00:31:04.547 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:31:04.547 CC module/accel/iaa/accel_iaa_rpc.o 00:31:04.547 CC module/blobfs/bdev/blobfs_bdev.o 00:31:04.547 SYMLINK libspdk_scheduler_gscheduler.so 00:31:04.547 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:31:04.547 LIB libspdk_sock_posix.a 00:31:04.547 LIB libspdk_fsdev_aio.a 00:31:04.547 SO libspdk_sock_posix.so.6.0 00:31:04.547 SO libspdk_fsdev_aio.so.1.0 00:31:04.547 LIB libspdk_accel_iaa.a 00:31:04.804 SO libspdk_accel_iaa.so.3.0 00:31:04.804 CC module/bdev/gpt/vbdev_gpt.o 00:31:04.804 SYMLINK libspdk_sock_posix.so 00:31:04.804 CC module/bdev/delay/vbdev_delay_rpc.o 00:31:04.804 CC module/bdev/error/vbdev_error_rpc.o 00:31:04.804 LIB libspdk_blobfs_bdev.a 00:31:04.804 SYMLINK libspdk_fsdev_aio.so 
00:31:04.804 SO libspdk_blobfs_bdev.so.6.0 00:31:04.804 SYMLINK libspdk_accel_iaa.so 00:31:04.804 SYMLINK libspdk_blobfs_bdev.so 00:31:04.804 LIB libspdk_bdev_delay.a 00:31:04.804 LIB libspdk_bdev_error.a 00:31:04.804 CC module/bdev/malloc/bdev_malloc.o 00:31:04.804 SO libspdk_bdev_delay.so.6.0 00:31:05.062 CC module/bdev/null/bdev_null.o 00:31:05.062 SO libspdk_bdev_error.so.6.0 00:31:05.062 CC module/bdev/nvme/bdev_nvme.o 00:31:05.062 SYMLINK libspdk_bdev_delay.so 00:31:05.063 SYMLINK libspdk_bdev_error.so 00:31:05.063 CC module/bdev/malloc/bdev_malloc_rpc.o 00:31:05.063 LIB libspdk_bdev_gpt.a 00:31:05.063 CC module/bdev/passthru/vbdev_passthru.o 00:31:05.063 SO libspdk_bdev_gpt.so.6.0 00:31:05.063 CC module/bdev/raid/bdev_raid.o 00:31:05.063 CC module/bdev/split/vbdev_split.o 00:31:05.063 SYMLINK libspdk_bdev_gpt.so 00:31:05.063 LIB libspdk_bdev_lvol.a 00:31:05.321 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:31:05.321 SO libspdk_bdev_lvol.so.6.0 00:31:05.321 CC module/bdev/zone_block/vbdev_zone_block.o 00:31:05.321 CC module/bdev/null/bdev_null_rpc.o 00:31:05.321 SYMLINK libspdk_bdev_lvol.so 00:31:05.321 CC module/bdev/xnvme/bdev_xnvme.o 00:31:05.321 LIB libspdk_bdev_malloc.a 00:31:05.321 CC module/bdev/split/vbdev_split_rpc.o 00:31:05.321 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:31:05.321 SO libspdk_bdev_malloc.so.6.0 00:31:05.321 LIB libspdk_bdev_passthru.a 00:31:05.321 SO libspdk_bdev_passthru.so.6.0 00:31:05.321 LIB libspdk_bdev_null.a 00:31:05.580 CC module/bdev/aio/bdev_aio.o 00:31:05.580 SO libspdk_bdev_null.so.6.0 00:31:05.580 SYMLINK libspdk_bdev_malloc.so 00:31:05.580 CC module/bdev/aio/bdev_aio_rpc.o 00:31:05.580 SYMLINK libspdk_bdev_passthru.so 00:31:05.580 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:31:05.580 SYMLINK libspdk_bdev_null.so 00:31:05.580 LIB libspdk_bdev_split.a 00:31:05.580 CC module/bdev/raid/bdev_raid_rpc.o 00:31:05.580 SO libspdk_bdev_split.so.6.0 00:31:05.580 CC module/bdev/raid/bdev_raid_sb.o 00:31:05.580 LIB libspdk_bdev_xnvme.a 00:31:05.580 SO libspdk_bdev_xnvme.so.3.0 00:31:05.580 SYMLINK libspdk_bdev_split.so 00:31:05.839 CC module/bdev/raid/raid0.o 00:31:05.839 LIB libspdk_bdev_zone_block.a 00:31:05.839 CC module/bdev/ftl/bdev_ftl.o 00:31:05.839 CC module/bdev/raid/raid1.o 00:31:05.839 SYMLINK libspdk_bdev_xnvme.so 00:31:05.839 CC module/bdev/nvme/bdev_nvme_rpc.o 00:31:05.839 SO libspdk_bdev_zone_block.so.6.0 00:31:05.839 CC module/bdev/raid/concat.o 00:31:05.839 SYMLINK libspdk_bdev_zone_block.so 00:31:05.839 CC module/bdev/ftl/bdev_ftl_rpc.o 00:31:05.839 LIB libspdk_bdev_aio.a 00:31:05.839 SO libspdk_bdev_aio.so.6.0 00:31:05.839 SYMLINK libspdk_bdev_aio.so 00:31:06.097 CC module/bdev/nvme/nvme_rpc.o 00:31:06.097 CC module/bdev/nvme/bdev_mdns_client.o 00:31:06.097 CC module/bdev/nvme/vbdev_opal.o 00:31:06.097 LIB libspdk_bdev_ftl.a 00:31:06.097 CC module/bdev/nvme/vbdev_opal_rpc.o 00:31:06.097 SO libspdk_bdev_ftl.so.6.0 00:31:06.097 CC module/bdev/iscsi/bdev_iscsi.o 00:31:06.097 CC module/bdev/virtio/bdev_virtio_scsi.o 00:31:06.097 SYMLINK libspdk_bdev_ftl.so 00:31:06.097 CC module/bdev/virtio/bdev_virtio_blk.o 00:31:06.097 CC module/bdev/virtio/bdev_virtio_rpc.o 00:31:06.355 LIB libspdk_bdev_raid.a 00:31:06.355 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:31:06.355 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:31:06.355 SO libspdk_bdev_raid.so.6.0 00:31:06.355 SYMLINK libspdk_bdev_raid.so 00:31:06.623 LIB libspdk_bdev_iscsi.a 00:31:06.623 SO libspdk_bdev_iscsi.so.6.0 00:31:06.623 SYMLINK libspdk_bdev_iscsi.so 00:31:06.623 LIB 
libspdk_bdev_virtio.a 00:31:06.895 SO libspdk_bdev_virtio.so.6.0 00:31:06.895 SYMLINK libspdk_bdev_virtio.so 00:31:08.273 LIB libspdk_bdev_nvme.a 00:31:08.273 SO libspdk_bdev_nvme.so.7.1 00:31:08.273 SYMLINK libspdk_bdev_nvme.so 00:31:08.841 CC module/event/subsystems/vmd/vmd.o 00:31:08.841 CC module/event/subsystems/vmd/vmd_rpc.o 00:31:08.841 CC module/event/subsystems/iobuf/iobuf.o 00:31:08.841 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:31:08.841 CC module/event/subsystems/sock/sock.o 00:31:08.841 CC module/event/subsystems/scheduler/scheduler.o 00:31:08.841 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:31:08.841 CC module/event/subsystems/keyring/keyring.o 00:31:08.841 CC module/event/subsystems/fsdev/fsdev.o 00:31:09.099 LIB libspdk_event_vhost_blk.a 00:31:09.099 LIB libspdk_event_scheduler.a 00:31:09.099 LIB libspdk_event_vmd.a 00:31:09.099 LIB libspdk_event_keyring.a 00:31:09.099 LIB libspdk_event_iobuf.a 00:31:09.099 LIB libspdk_event_sock.a 00:31:09.099 LIB libspdk_event_fsdev.a 00:31:09.099 SO libspdk_event_vhost_blk.so.3.0 00:31:09.099 SO libspdk_event_scheduler.so.4.0 00:31:09.099 SO libspdk_event_vmd.so.6.0 00:31:09.099 SO libspdk_event_keyring.so.1.0 00:31:09.099 SO libspdk_event_iobuf.so.3.0 00:31:09.099 SO libspdk_event_sock.so.5.0 00:31:09.099 SO libspdk_event_fsdev.so.1.0 00:31:09.099 SYMLINK libspdk_event_scheduler.so 00:31:09.099 SYMLINK libspdk_event_keyring.so 00:31:09.099 SYMLINK libspdk_event_vhost_blk.so 00:31:09.099 SYMLINK libspdk_event_vmd.so 00:31:09.099 SYMLINK libspdk_event_iobuf.so 00:31:09.099 SYMLINK libspdk_event_sock.so 00:31:09.099 SYMLINK libspdk_event_fsdev.so 00:31:09.666 CC module/event/subsystems/accel/accel.o 00:31:09.666 LIB libspdk_event_accel.a 00:31:09.924 SO libspdk_event_accel.so.6.0 00:31:09.925 SYMLINK libspdk_event_accel.so 00:31:10.183 CC module/event/subsystems/bdev/bdev.o 00:31:10.441 LIB libspdk_event_bdev.a 00:31:10.441 SO libspdk_event_bdev.so.6.0 00:31:10.700 SYMLINK libspdk_event_bdev.so 00:31:10.959 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:31:10.959 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:31:10.959 CC module/event/subsystems/scsi/scsi.o 00:31:10.959 CC module/event/subsystems/nbd/nbd.o 00:31:10.959 CC module/event/subsystems/ublk/ublk.o 00:31:11.218 LIB libspdk_event_ublk.a 00:31:11.218 LIB libspdk_event_nbd.a 00:31:11.218 LIB libspdk_event_scsi.a 00:31:11.218 SO libspdk_event_nbd.so.6.0 00:31:11.218 SO libspdk_event_ublk.so.3.0 00:31:11.218 SO libspdk_event_scsi.so.6.0 00:31:11.218 LIB libspdk_event_nvmf.a 00:31:11.218 SYMLINK libspdk_event_nbd.so 00:31:11.218 SO libspdk_event_nvmf.so.6.0 00:31:11.218 SYMLINK libspdk_event_scsi.so 00:31:11.218 SYMLINK libspdk_event_ublk.so 00:31:11.218 SYMLINK libspdk_event_nvmf.so 00:31:11.477 CC module/event/subsystems/iscsi/iscsi.o 00:31:11.736 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:31:11.736 LIB libspdk_event_iscsi.a 00:31:11.736 LIB libspdk_event_vhost_scsi.a 00:31:11.736 SO libspdk_event_iscsi.so.6.0 00:31:11.736 SO libspdk_event_vhost_scsi.so.3.0 00:31:11.995 SYMLINK libspdk_event_vhost_scsi.so 00:31:11.995 SYMLINK libspdk_event_iscsi.so 00:31:11.995 SO libspdk.so.6.0 00:31:11.995 SYMLINK libspdk.so 00:31:12.561 CXX app/trace/trace.o 00:31:12.561 CC app/trace_record/trace_record.o 00:31:12.561 TEST_HEADER include/spdk/accel.h 00:31:12.561 TEST_HEADER include/spdk/accel_module.h 00:31:12.561 TEST_HEADER include/spdk/assert.h 00:31:12.561 TEST_HEADER include/spdk/barrier.h 00:31:12.561 TEST_HEADER include/spdk/base64.h 00:31:12.561 TEST_HEADER 
include/spdk/bdev.h 00:31:12.561 TEST_HEADER include/spdk/bdev_module.h 00:31:12.561 TEST_HEADER include/spdk/bdev_zone.h 00:31:12.561 TEST_HEADER include/spdk/bit_array.h 00:31:12.561 TEST_HEADER include/spdk/bit_pool.h 00:31:12.561 TEST_HEADER include/spdk/blob_bdev.h 00:31:12.561 TEST_HEADER include/spdk/blobfs_bdev.h 00:31:12.561 CC examples/interrupt_tgt/interrupt_tgt.o 00:31:12.561 TEST_HEADER include/spdk/blobfs.h 00:31:12.561 TEST_HEADER include/spdk/blob.h 00:31:12.561 TEST_HEADER include/spdk/conf.h 00:31:12.561 TEST_HEADER include/spdk/config.h 00:31:12.561 TEST_HEADER include/spdk/cpuset.h 00:31:12.561 TEST_HEADER include/spdk/crc16.h 00:31:12.561 TEST_HEADER include/spdk/crc32.h 00:31:12.561 TEST_HEADER include/spdk/crc64.h 00:31:12.561 TEST_HEADER include/spdk/dif.h 00:31:12.561 TEST_HEADER include/spdk/dma.h 00:31:12.561 TEST_HEADER include/spdk/endian.h 00:31:12.561 TEST_HEADER include/spdk/env_dpdk.h 00:31:12.561 TEST_HEADER include/spdk/env.h 00:31:12.561 TEST_HEADER include/spdk/event.h 00:31:12.561 TEST_HEADER include/spdk/fd_group.h 00:31:12.561 TEST_HEADER include/spdk/fd.h 00:31:12.561 TEST_HEADER include/spdk/file.h 00:31:12.561 TEST_HEADER include/spdk/fsdev.h 00:31:12.561 TEST_HEADER include/spdk/fsdev_module.h 00:31:12.561 TEST_HEADER include/spdk/ftl.h 00:31:12.561 TEST_HEADER include/spdk/fuse_dispatcher.h 00:31:12.561 TEST_HEADER include/spdk/gpt_spec.h 00:31:12.561 TEST_HEADER include/spdk/hexlify.h 00:31:12.561 TEST_HEADER include/spdk/histogram_data.h 00:31:12.561 TEST_HEADER include/spdk/idxd.h 00:31:12.561 TEST_HEADER include/spdk/idxd_spec.h 00:31:12.561 TEST_HEADER include/spdk/init.h 00:31:12.561 CC examples/util/zipf/zipf.o 00:31:12.561 TEST_HEADER include/spdk/ioat.h 00:31:12.561 TEST_HEADER include/spdk/ioat_spec.h 00:31:12.561 CC examples/ioat/perf/perf.o 00:31:12.561 TEST_HEADER include/spdk/iscsi_spec.h 00:31:12.561 TEST_HEADER include/spdk/json.h 00:31:12.561 CC test/dma/test_dma/test_dma.o 00:31:12.561 TEST_HEADER include/spdk/jsonrpc.h 00:31:12.561 TEST_HEADER include/spdk/keyring.h 00:31:12.561 TEST_HEADER include/spdk/keyring_module.h 00:31:12.561 CC test/thread/poller_perf/poller_perf.o 00:31:12.561 TEST_HEADER include/spdk/likely.h 00:31:12.561 TEST_HEADER include/spdk/log.h 00:31:12.561 CC test/app/bdev_svc/bdev_svc.o 00:31:12.561 TEST_HEADER include/spdk/lvol.h 00:31:12.561 TEST_HEADER include/spdk/md5.h 00:31:12.561 TEST_HEADER include/spdk/memory.h 00:31:12.561 TEST_HEADER include/spdk/mmio.h 00:31:12.561 TEST_HEADER include/spdk/nbd.h 00:31:12.561 TEST_HEADER include/spdk/net.h 00:31:12.561 TEST_HEADER include/spdk/notify.h 00:31:12.561 TEST_HEADER include/spdk/nvme.h 00:31:12.561 TEST_HEADER include/spdk/nvme_intel.h 00:31:12.561 TEST_HEADER include/spdk/nvme_ocssd.h 00:31:12.561 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:31:12.561 TEST_HEADER include/spdk/nvme_spec.h 00:31:12.561 TEST_HEADER include/spdk/nvme_zns.h 00:31:12.561 TEST_HEADER include/spdk/nvmf_cmd.h 00:31:12.561 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:31:12.561 TEST_HEADER include/spdk/nvmf.h 00:31:12.561 TEST_HEADER include/spdk/nvmf_spec.h 00:31:12.561 TEST_HEADER include/spdk/nvmf_transport.h 00:31:12.561 TEST_HEADER include/spdk/opal.h 00:31:12.561 TEST_HEADER include/spdk/opal_spec.h 00:31:12.561 LINK interrupt_tgt 00:31:12.561 TEST_HEADER include/spdk/pci_ids.h 00:31:12.820 TEST_HEADER include/spdk/pipe.h 00:31:12.820 CC test/env/mem_callbacks/mem_callbacks.o 00:31:12.820 TEST_HEADER include/spdk/queue.h 00:31:12.820 TEST_HEADER include/spdk/reduce.h 
00:31:12.820 TEST_HEADER include/spdk/rpc.h 00:31:12.820 TEST_HEADER include/spdk/scheduler.h 00:31:12.820 TEST_HEADER include/spdk/scsi.h 00:31:12.820 TEST_HEADER include/spdk/scsi_spec.h 00:31:12.820 TEST_HEADER include/spdk/sock.h 00:31:12.820 TEST_HEADER include/spdk/stdinc.h 00:31:12.820 TEST_HEADER include/spdk/string.h 00:31:12.820 TEST_HEADER include/spdk/thread.h 00:31:12.820 TEST_HEADER include/spdk/trace.h 00:31:12.820 TEST_HEADER include/spdk/trace_parser.h 00:31:12.820 LINK spdk_trace_record 00:31:12.820 TEST_HEADER include/spdk/tree.h 00:31:12.820 TEST_HEADER include/spdk/ublk.h 00:31:12.820 LINK zipf 00:31:12.820 TEST_HEADER include/spdk/util.h 00:31:12.820 TEST_HEADER include/spdk/uuid.h 00:31:12.820 TEST_HEADER include/spdk/version.h 00:31:12.820 TEST_HEADER include/spdk/vfio_user_pci.h 00:31:12.820 TEST_HEADER include/spdk/vfio_user_spec.h 00:31:12.820 TEST_HEADER include/spdk/vhost.h 00:31:12.820 TEST_HEADER include/spdk/vmd.h 00:31:12.820 TEST_HEADER include/spdk/xor.h 00:31:12.820 TEST_HEADER include/spdk/zipf.h 00:31:12.820 CXX test/cpp_headers/accel.o 00:31:12.820 LINK poller_perf 00:31:12.820 LINK ioat_perf 00:31:12.820 LINK bdev_svc 00:31:12.820 LINK spdk_trace 00:31:12.820 CXX test/cpp_headers/accel_module.o 00:31:13.078 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:31:13.078 CC test/env/vtophys/vtophys.o 00:31:13.078 CC test/env/memory/memory_ut.o 00:31:13.078 CC examples/ioat/verify/verify.o 00:31:13.078 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:31:13.078 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:31:13.078 CXX test/cpp_headers/assert.o 00:31:13.078 LINK env_dpdk_post_init 00:31:13.078 LINK vtophys 00:31:13.078 LINK test_dma 00:31:13.078 CC app/nvmf_tgt/nvmf_main.o 00:31:13.336 LINK mem_callbacks 00:31:13.336 LINK verify 00:31:13.336 CXX test/cpp_headers/barrier.o 00:31:13.336 LINK nvmf_tgt 00:31:13.336 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:31:13.336 CC test/app/histogram_perf/histogram_perf.o 00:31:13.593 CC test/app/jsoncat/jsoncat.o 00:31:13.593 CC test/env/pci/pci_ut.o 00:31:13.593 CXX test/cpp_headers/base64.o 00:31:13.593 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:31:13.593 LINK nvme_fuzz 00:31:13.593 LINK histogram_perf 00:31:13.593 LINK jsoncat 00:31:13.593 CC examples/thread/thread/thread_ex.o 00:31:13.593 CXX test/cpp_headers/bdev.o 00:31:13.850 CC app/iscsi_tgt/iscsi_tgt.o 00:31:13.850 CC test/app/stub/stub.o 00:31:13.850 CXX test/cpp_headers/bdev_module.o 00:31:13.850 LINK pci_ut 00:31:13.850 LINK thread 00:31:13.850 LINK iscsi_tgt 00:31:13.850 CC examples/sock/hello_world/hello_sock.o 00:31:13.850 CC examples/vmd/lsvmd/lsvmd.o 00:31:14.107 LINK stub 00:31:14.107 LINK vhost_fuzz 00:31:14.107 CXX test/cpp_headers/bdev_zone.o 00:31:14.107 LINK lsvmd 00:31:14.365 LINK hello_sock 00:31:14.365 CXX test/cpp_headers/bit_array.o 00:31:14.365 LINK memory_ut 00:31:14.365 CC app/spdk_tgt/spdk_tgt.o 00:31:14.365 CC examples/idxd/perf/perf.o 00:31:14.365 CC examples/vmd/led/led.o 00:31:14.365 CC examples/accel/perf/accel_perf.o 00:31:14.365 CC examples/fsdev/hello_world/hello_fsdev.o 00:31:14.365 CC examples/blob/hello_world/hello_blob.o 00:31:14.365 CXX test/cpp_headers/bit_pool.o 00:31:14.623 LINK led 00:31:14.623 LINK spdk_tgt 00:31:14.623 CC examples/blob/cli/blobcli.o 00:31:14.623 CC test/rpc_client/rpc_client_test.o 00:31:14.623 CXX test/cpp_headers/blob_bdev.o 00:31:14.623 LINK hello_blob 00:31:14.623 LINK hello_fsdev 00:31:14.623 LINK idxd_perf 00:31:14.908 LINK rpc_client_test 00:31:14.908 CXX test/cpp_headers/blobfs_bdev.o 
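The interleaved TEST_HEADER / CXX test/cpp_headers lines are a header self-sufficiency check: each public spdk/*.h is compiled on its own as a C++ translation unit, so a header that silently relies on another include fails here rather than in an application build. A minimal sketch of the idea, with generic file names rather than the actual test harness:

    # compile every public header standalone; a failure means it is not self-contained
    for hdr in include/spdk/*.h; do
        echo "#include <${hdr#include/}>" > /tmp/hdr_check.cpp
        g++ -Iinclude -fsyntax-only /tmp/hdr_check.cpp || echo "not self-contained: $hdr"
    done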
00:31:14.908 CC app/spdk_lspci/spdk_lspci.o 00:31:14.908 CC examples/nvme/hello_world/hello_world.o 00:31:14.908 LINK accel_perf 00:31:14.908 CXX test/cpp_headers/blobfs.o 00:31:14.908 CC examples/nvme/reconnect/reconnect.o 00:31:15.192 CXX test/cpp_headers/blob.o 00:31:15.192 LINK spdk_lspci 00:31:15.192 CC examples/nvme/nvme_manage/nvme_manage.o 00:31:15.192 CC test/accel/dif/dif.o 00:31:15.192 LINK hello_world 00:31:15.192 LINK blobcli 00:31:15.192 CXX test/cpp_headers/config.o 00:31:15.192 CXX test/cpp_headers/conf.o 00:31:15.192 LINK iscsi_fuzz 00:31:15.192 CC app/spdk_nvme_perf/perf.o 00:31:15.192 CC app/spdk_nvme_identify/identify.o 00:31:15.452 CC examples/nvme/arbitration/arbitration.o 00:31:15.452 CXX test/cpp_headers/cpuset.o 00:31:15.452 LINK reconnect 00:31:15.452 CC app/spdk_nvme_discover/discovery_aer.o 00:31:15.452 CXX test/cpp_headers/crc16.o 00:31:15.711 CC examples/bdev/hello_world/hello_bdev.o 00:31:15.711 CXX test/cpp_headers/crc32.o 00:31:15.711 LINK nvme_manage 00:31:15.711 CC examples/nvme/hotplug/hotplug.o 00:31:15.711 CC examples/bdev/bdevperf/bdevperf.o 00:31:15.711 LINK spdk_nvme_discover 00:31:15.711 LINK arbitration 00:31:15.711 CXX test/cpp_headers/crc64.o 00:31:15.970 LINK hello_bdev 00:31:15.970 CXX test/cpp_headers/dif.o 00:31:15.970 LINK dif 00:31:15.970 LINK hotplug 00:31:15.970 CC examples/nvme/cmb_copy/cmb_copy.o 00:31:15.970 CC examples/nvme/abort/abort.o 00:31:16.228 CXX test/cpp_headers/dma.o 00:31:16.228 LINK cmb_copy 00:31:16.228 CC test/blobfs/mkfs/mkfs.o 00:31:16.228 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:31:16.228 CC app/spdk_top/spdk_top.o 00:31:16.228 LINK spdk_nvme_identify 00:31:16.228 CXX test/cpp_headers/endian.o 00:31:16.228 CC test/event/event_perf/event_perf.o 00:31:16.486 LINK mkfs 00:31:16.486 LINK spdk_nvme_perf 00:31:16.486 CC test/event/reactor/reactor.o 00:31:16.486 LINK pmr_persistence 00:31:16.486 CXX test/cpp_headers/env_dpdk.o 00:31:16.486 LINK abort 00:31:16.486 CXX test/cpp_headers/env.o 00:31:16.486 LINK event_perf 00:31:16.486 LINK reactor 00:31:16.745 CXX test/cpp_headers/event.o 00:31:16.745 CXX test/cpp_headers/fd_group.o 00:31:16.745 LINK bdevperf 00:31:16.745 CC app/vhost/vhost.o 00:31:16.745 CC app/spdk_dd/spdk_dd.o 00:31:16.745 CC test/event/reactor_perf/reactor_perf.o 00:31:16.745 CC test/nvme/aer/aer.o 00:31:16.745 CXX test/cpp_headers/fd.o 00:31:17.004 CC test/lvol/esnap/esnap.o 00:31:17.004 CC test/bdev/bdevio/bdevio.o 00:31:17.004 CC test/nvme/reset/reset.o 00:31:17.004 LINK vhost 00:31:17.004 LINK reactor_perf 00:31:17.004 CXX test/cpp_headers/file.o 00:31:17.004 CC examples/nvmf/nvmf/nvmf.o 00:31:17.263 LINK aer 00:31:17.263 LINK spdk_dd 00:31:17.263 CXX test/cpp_headers/fsdev.o 00:31:17.263 LINK reset 00:31:17.263 CC test/event/app_repeat/app_repeat.o 00:31:17.263 LINK spdk_top 00:31:17.263 CC test/event/scheduler/scheduler.o 00:31:17.263 CXX test/cpp_headers/fsdev_module.o 00:31:17.263 LINK bdevio 00:31:17.522 CC test/nvme/sgl/sgl.o 00:31:17.522 LINK app_repeat 00:31:17.522 CXX test/cpp_headers/ftl.o 00:31:17.522 LINK nvmf 00:31:17.522 CC test/nvme/e2edp/nvme_dp.o 00:31:17.522 CXX test/cpp_headers/fuse_dispatcher.o 00:31:17.522 LINK scheduler 00:31:17.522 CC app/fio/nvme/fio_plugin.o 00:31:17.522 CXX test/cpp_headers/gpt_spec.o 00:31:17.522 CXX test/cpp_headers/hexlify.o 00:31:17.781 CXX test/cpp_headers/histogram_data.o 00:31:17.781 CC app/fio/bdev/fio_plugin.o 00:31:17.781 LINK sgl 00:31:17.781 CXX test/cpp_headers/idxd.o 00:31:17.782 CXX test/cpp_headers/idxd_spec.o 00:31:17.782 LINK 
nvme_dp 00:31:17.782 CXX test/cpp_headers/init.o 00:31:17.782 CC test/nvme/overhead/overhead.o 00:31:17.782 CC test/nvme/err_injection/err_injection.o 00:31:18.041 CC test/nvme/startup/startup.o 00:31:18.041 CC test/nvme/reserve/reserve.o 00:31:18.041 CXX test/cpp_headers/ioat.o 00:31:18.041 CC test/nvme/simple_copy/simple_copy.o 00:31:18.041 CC test/nvme/connect_stress/connect_stress.o 00:31:18.041 LINK err_injection 00:31:18.041 LINK startup 00:31:18.041 LINK overhead 00:31:18.301 LINK spdk_nvme 00:31:18.301 CXX test/cpp_headers/ioat_spec.o 00:31:18.301 LINK reserve 00:31:18.301 LINK spdk_bdev 00:31:18.301 LINK connect_stress 00:31:18.301 LINK simple_copy 00:31:18.301 CXX test/cpp_headers/iscsi_spec.o 00:31:18.301 CC test/nvme/boot_partition/boot_partition.o 00:31:18.301 CC test/nvme/compliance/nvme_compliance.o 00:31:18.301 CXX test/cpp_headers/json.o 00:31:18.560 CC test/nvme/fused_ordering/fused_ordering.o 00:31:18.560 CC test/nvme/doorbell_aers/doorbell_aers.o 00:31:18.560 CC test/nvme/fdp/fdp.o 00:31:18.560 CXX test/cpp_headers/jsonrpc.o 00:31:18.560 CXX test/cpp_headers/keyring.o 00:31:18.560 CC test/nvme/cuse/cuse.o 00:31:18.560 LINK boot_partition 00:31:18.560 CXX test/cpp_headers/keyring_module.o 00:31:18.560 LINK fused_ordering 00:31:18.560 CXX test/cpp_headers/likely.o 00:31:18.560 CXX test/cpp_headers/log.o 00:31:18.560 LINK doorbell_aers 00:31:18.874 CXX test/cpp_headers/lvol.o 00:31:18.874 LINK nvme_compliance 00:31:18.874 CXX test/cpp_headers/md5.o 00:31:18.874 CXX test/cpp_headers/memory.o 00:31:18.874 CXX test/cpp_headers/mmio.o 00:31:18.874 CXX test/cpp_headers/nbd.o 00:31:18.874 CXX test/cpp_headers/net.o 00:31:18.874 LINK fdp 00:31:18.874 CXX test/cpp_headers/notify.o 00:31:18.874 CXX test/cpp_headers/nvme.o 00:31:18.874 CXX test/cpp_headers/nvme_intel.o 00:31:19.132 CXX test/cpp_headers/nvme_ocssd.o 00:31:19.132 CXX test/cpp_headers/nvme_ocssd_spec.o 00:31:19.132 CXX test/cpp_headers/nvme_spec.o 00:31:19.132 CXX test/cpp_headers/nvme_zns.o 00:31:19.132 CXX test/cpp_headers/nvmf_cmd.o 00:31:19.132 CXX test/cpp_headers/nvmf_fc_spec.o 00:31:19.132 CXX test/cpp_headers/nvmf.o 00:31:19.132 CXX test/cpp_headers/nvmf_spec.o 00:31:19.132 CXX test/cpp_headers/nvmf_transport.o 00:31:19.132 CXX test/cpp_headers/opal.o 00:31:19.132 CXX test/cpp_headers/opal_spec.o 00:31:19.132 CXX test/cpp_headers/pci_ids.o 00:31:19.132 CXX test/cpp_headers/pipe.o 00:31:19.132 CXX test/cpp_headers/queue.o 00:31:19.391 CXX test/cpp_headers/reduce.o 00:31:19.391 CXX test/cpp_headers/rpc.o 00:31:19.391 CXX test/cpp_headers/scheduler.o 00:31:19.391 CXX test/cpp_headers/scsi.o 00:31:19.391 CXX test/cpp_headers/scsi_spec.o 00:31:19.391 CXX test/cpp_headers/sock.o 00:31:19.391 CXX test/cpp_headers/stdinc.o 00:31:19.391 CXX test/cpp_headers/string.o 00:31:19.391 CXX test/cpp_headers/thread.o 00:31:19.391 CXX test/cpp_headers/trace.o 00:31:19.391 CXX test/cpp_headers/trace_parser.o 00:31:19.651 CXX test/cpp_headers/tree.o 00:31:19.651 CXX test/cpp_headers/ublk.o 00:31:19.651 CXX test/cpp_headers/util.o 00:31:19.651 CXX test/cpp_headers/uuid.o 00:31:19.651 CXX test/cpp_headers/version.o 00:31:19.651 CXX test/cpp_headers/vfio_user_pci.o 00:31:19.651 CXX test/cpp_headers/vfio_user_spec.o 00:31:19.651 CXX test/cpp_headers/vhost.o 00:31:19.651 CXX test/cpp_headers/vmd.o 00:31:19.651 CXX test/cpp_headers/xor.o 00:31:19.651 CXX test/cpp_headers/zipf.o 00:31:19.909 LINK cuse 00:31:23.233 LINK esnap 00:31:23.808 00:31:23.808 real 1m25.957s 00:31:23.808 user 7m28.796s 00:31:23.808 sys 1m53.653s 00:31:23.808 
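The SO/SYMLINK pairs near the top of this make output follow the usual versioned shared-library convention: the link step produces libspdk_<name>.so.<major>.<minor>, and an unversioned libspdk_<name>.so symlink is then pointed at it for link-time consumers. A minimal sketch of that convention, with an illustrative library name and flags rather than SPDK's actual Makefile recipe:

    # illustrative only, not SPDK's actual Makefile recipe
    gcc -c -fPIC demo.c -o demo.o
    gcc -shared -Wl,-soname,libdemo.so.6 -o libdemo.so.6.0 demo.o   # the "SO" step
    ln -sf libdemo.so.6.0 libdemo.so                                # the "SYMLINK" step

At run time the loader resolves the embedded soname (libdemo.so.6), so already-linked binaries keep working across minor rebuilds while the plain .so name always tracks the newest build.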
17:28:24 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:31:23.808 ************************************ 00:31:23.808 END TEST make 00:31:23.808 ************************************ 00:31:23.808 17:28:24 make -- common/autotest_common.sh@10 -- $ set +x 00:31:23.808 17:28:24 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:31:23.808 17:28:24 -- pm/common@29 -- $ signal_monitor_resources TERM 00:31:23.808 17:28:24 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:31:23.808 17:28:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:23.808 17:28:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:31:23.808 17:28:24 -- pm/common@44 -- $ pid=5291 00:31:23.808 17:28:24 -- pm/common@50 -- $ kill -TERM 5291 00:31:23.808 17:28:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:23.808 17:28:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:31:23.808 17:28:24 -- pm/common@44 -- $ pid=5292 00:31:23.808 17:28:24 -- pm/common@50 -- $ kill -TERM 5292 00:31:23.808 17:28:24 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:31:23.808 17:28:24 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:31:23.808 17:28:24 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:23.808 17:28:24 -- common/autotest_common.sh@1693 -- # lcov --version 00:31:23.808 17:28:24 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:24.070 17:28:24 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:24.070 17:28:24 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:24.070 17:28:24 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:24.070 17:28:24 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:24.070 17:28:24 -- scripts/common.sh@336 -- # IFS=.-: 00:31:24.070 17:28:24 -- scripts/common.sh@336 -- # read -ra ver1 00:31:24.070 17:28:24 -- scripts/common.sh@337 -- # IFS=.-: 00:31:24.070 17:28:24 -- scripts/common.sh@337 -- # read -ra ver2 00:31:24.070 17:28:24 -- scripts/common.sh@338 -- # local 'op=<' 00:31:24.070 17:28:24 -- scripts/common.sh@340 -- # ver1_l=2 00:31:24.070 17:28:24 -- scripts/common.sh@341 -- # ver2_l=1 00:31:24.070 17:28:24 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:24.070 17:28:24 -- scripts/common.sh@344 -- # case "$op" in 00:31:24.070 17:28:24 -- scripts/common.sh@345 -- # : 1 00:31:24.070 17:28:24 -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:24.070 17:28:24 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:24.070 17:28:24 -- scripts/common.sh@365 -- # decimal 1 00:31:24.070 17:28:24 -- scripts/common.sh@353 -- # local d=1 00:31:24.070 17:28:24 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:24.070 17:28:24 -- scripts/common.sh@355 -- # echo 1 00:31:24.070 17:28:24 -- scripts/common.sh@365 -- # ver1[v]=1 00:31:24.070 17:28:24 -- scripts/common.sh@366 -- # decimal 2 00:31:24.070 17:28:24 -- scripts/common.sh@353 -- # local d=2 00:31:24.070 17:28:24 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:24.070 17:28:24 -- scripts/common.sh@355 -- # echo 2 00:31:24.070 17:28:24 -- scripts/common.sh@366 -- # ver2[v]=2 00:31:24.070 17:28:24 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:24.070 17:28:24 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:24.070 17:28:24 -- scripts/common.sh@368 -- # return 0 00:31:24.070 17:28:24 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:24.070 17:28:24 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:24.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.070 --rc genhtml_branch_coverage=1 00:31:24.070 --rc genhtml_function_coverage=1 00:31:24.070 --rc genhtml_legend=1 00:31:24.070 --rc geninfo_all_blocks=1 00:31:24.070 --rc geninfo_unexecuted_blocks=1 00:31:24.070 00:31:24.070 ' 00:31:24.070 17:28:24 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:24.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.070 --rc genhtml_branch_coverage=1 00:31:24.070 --rc genhtml_function_coverage=1 00:31:24.070 --rc genhtml_legend=1 00:31:24.070 --rc geninfo_all_blocks=1 00:31:24.070 --rc geninfo_unexecuted_blocks=1 00:31:24.070 00:31:24.070 ' 00:31:24.070 17:28:24 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:24.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.070 --rc genhtml_branch_coverage=1 00:31:24.070 --rc genhtml_function_coverage=1 00:31:24.070 --rc genhtml_legend=1 00:31:24.070 --rc geninfo_all_blocks=1 00:31:24.070 --rc geninfo_unexecuted_blocks=1 00:31:24.070 00:31:24.070 ' 00:31:24.070 17:28:24 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:24.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.070 --rc genhtml_branch_coverage=1 00:31:24.070 --rc genhtml_function_coverage=1 00:31:24.070 --rc genhtml_legend=1 00:31:24.070 --rc geninfo_all_blocks=1 00:31:24.070 --rc geninfo_unexecuted_blocks=1 00:31:24.070 00:31:24.070 ' 00:31:24.070 17:28:24 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:24.070 17:28:24 -- nvmf/common.sh@7 -- # uname -s 00:31:24.070 17:28:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:24.070 17:28:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:24.070 17:28:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:24.070 17:28:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:24.070 17:28:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:24.070 17:28:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:24.070 17:28:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:24.070 17:28:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:24.070 17:28:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:24.070 17:28:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:24.070 17:28:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e9ca998-9bad-4879-8e46-bbaba251cb9e 00:31:24.070 
17:28:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=1e9ca998-9bad-4879-8e46-bbaba251cb9e 00:31:24.070 17:28:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:24.070 17:28:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:24.070 17:28:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:31:24.070 17:28:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:24.070 17:28:24 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:24.070 17:28:24 -- scripts/common.sh@15 -- # shopt -s extglob 00:31:24.070 17:28:24 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:24.070 17:28:24 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:24.070 17:28:24 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:24.070 17:28:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.070 17:28:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.070 17:28:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.070 17:28:24 -- paths/export.sh@5 -- # export PATH 00:31:24.070 17:28:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.071 17:28:24 -- nvmf/common.sh@51 -- # : 0 00:31:24.071 17:28:24 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:24.071 17:28:24 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:24.071 17:28:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:24.071 17:28:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:24.071 17:28:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:24.071 17:28:24 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:24.071 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:24.071 17:28:24 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:24.071 17:28:24 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:24.071 17:28:24 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:24.071 17:28:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:31:24.071 17:28:24 -- spdk/autotest.sh@32 -- # uname -s 00:31:24.071 17:28:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:31:24.071 17:28:24 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:31:24.071 17:28:24 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:31:24.071 17:28:24 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:31:24.071 17:28:24 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:31:24.071 17:28:24 -- spdk/autotest.sh@44 -- # modprobe nbd 00:31:24.071 17:28:24 -- spdk/autotest.sh@46 -- # type -P udevadm 00:31:24.071 17:28:24 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:31:24.071 17:28:24 -- spdk/autotest.sh@48 -- # udevadm_pid=54786 00:31:24.071 17:28:24 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:31:24.071 17:28:24 -- pm/common@17 -- # local monitor 00:31:24.071 17:28:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:31:24.071 17:28:24 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:31:24.071 17:28:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:31:24.071 17:28:24 -- pm/common@21 -- # date +%s 00:31:24.071 17:28:24 -- pm/common@21 -- # date +%s 00:31:24.071 17:28:24 -- pm/common@25 -- # sleep 1 00:31:24.071 17:28:24 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732642104 00:31:24.071 17:28:24 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732642104 00:31:24.071 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732642104_collect-vmstat.pm.log 00:31:24.071 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732642104_collect-cpu-load.pm.log 00:31:25.010 17:28:25 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:31:25.010 17:28:25 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:31:25.010 17:28:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:25.010 17:28:25 -- common/autotest_common.sh@10 -- # set +x 00:31:25.010 17:28:25 -- spdk/autotest.sh@59 -- # create_test_list 00:31:25.269 17:28:25 -- common/autotest_common.sh@752 -- # xtrace_disable 00:31:25.269 17:28:25 -- common/autotest_common.sh@10 -- # set +x 00:31:25.269 17:28:25 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:31:25.269 17:28:25 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:31:25.269 17:28:25 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:31:25.269 17:28:25 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:31:25.269 17:28:25 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:31:25.269 17:28:25 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:31:25.269 17:28:25 -- common/autotest_common.sh@1457 -- # uname 00:31:25.269 17:28:25 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:31:25.269 17:28:25 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:31:25.269 17:28:25 -- common/autotest_common.sh@1477 -- # uname 00:31:25.269 17:28:25 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:31:25.269 17:28:25 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:31:25.269 17:28:25 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:31:25.269 lcov: LCOV version 1.15 00:31:25.269 17:28:25 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:31:43.394 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:31:43.394 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:31:58.280 17:28:57 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:31:58.280 17:28:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:58.280 17:28:57 -- common/autotest_common.sh@10 -- # set +x 00:31:58.280 17:28:57 -- spdk/autotest.sh@78 -- # rm -f 00:31:58.280 17:28:57 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:58.280 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:58.280 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:58.280 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:58.280 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:31:58.280 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:31:58.538 17:28:58 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:31:58.538 17:28:58 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:31:58.538 17:28:58 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:31:58.538 17:28:58 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:31:58.538 17:28:58 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:31:58.538 17:28:58 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:31:58.538 17:28:58 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:31:58.538 17:28:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:58.538 17:28:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:58.538 17:28:58 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:31:58.538 17:28:58 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:31:58.538 17:28:58 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:31:58.538 17:28:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:31:58.538 17:28:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:58.538 17:28:58 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:31:58.538 17:28:58 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:31:58.538 17:28:58 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:31:58.538 17:28:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:31:58.538 17:28:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:58.538 17:28:58 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:31:58.538 17:28:58 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:31:58.538 17:28:58 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:31:58.538 17:28:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:31:58.538 17:28:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:58.538 17:28:58 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:31:58.538 17:28:58 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:31:58.538 17:28:58 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:31:58.538 17:28:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:31:58.538 17:28:58 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:58.538 17:28:58 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:31:58.538 17:28:58 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:31:58.538 17:28:58 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:31:58.538 17:28:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:31:58.538 17:28:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:58.538 17:28:58 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:31:58.538 17:28:58 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:31:58.538 17:28:58 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:31:58.538 17:28:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:31:58.538 17:28:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:58.538 17:28:58 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:31:58.538 17:28:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:31:58.538 17:28:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:31:58.538 17:28:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:31:58.538 17:28:58 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:31:58.538 17:28:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:31:58.538 No valid GPT data, bailing 00:31:58.538 17:28:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:58.538 17:28:59 -- scripts/common.sh@394 -- # pt= 00:31:58.538 17:28:59 -- scripts/common.sh@395 -- # return 1 00:31:58.538 17:28:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:31:58.538 1+0 records in 00:31:58.538 1+0 records out 00:31:58.538 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0190031 s, 55.2 MB/s 00:31:58.538 17:28:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:31:58.538 17:28:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:31:58.538 17:28:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:31:58.538 17:28:59 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:31:58.538 17:28:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:31:58.538 No valid GPT data, bailing 00:31:58.538 17:28:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:31:58.538 17:28:59 -- scripts/common.sh@394 -- # pt= 00:31:58.538 17:28:59 -- scripts/common.sh@395 -- # return 1 00:31:58.538 17:28:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:31:58.538 1+0 records in 00:31:58.538 1+0 records out 00:31:58.538 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00402769 s, 260 MB/s 00:31:58.538 17:28:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:31:58.538 17:28:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:31:58.538 17:28:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:31:58.538 17:28:59 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:31:58.538 17:28:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:31:58.538 No valid GPT data, bailing 00:31:58.796 17:28:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:31:58.796 17:28:59 -- scripts/common.sh@394 -- # pt= 00:31:58.796 17:28:59 -- scripts/common.sh@395 -- # return 1 00:31:58.796 17:28:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:31:58.796 1+0 
records in 00:31:58.796 1+0 records out 00:31:58.796 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00605038 s, 173 MB/s 00:31:58.796 17:28:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:31:58.796 17:28:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:31:58.796 17:28:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:31:58.796 17:28:59 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:31:58.796 17:28:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:31:58.796 No valid GPT data, bailing 00:31:58.796 17:28:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:31:58.796 17:28:59 -- scripts/common.sh@394 -- # pt= 00:31:58.796 17:28:59 -- scripts/common.sh@395 -- # return 1 00:31:58.796 17:28:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:31:58.796 1+0 records in 00:31:58.796 1+0 records out 00:31:58.796 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00591786 s, 177 MB/s 00:31:58.796 17:28:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:31:58.796 17:28:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:31:58.796 17:28:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:31:58.796 17:28:59 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:31:58.796 17:28:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:31:58.796 No valid GPT data, bailing 00:31:58.796 17:28:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:31:58.796 17:28:59 -- scripts/common.sh@394 -- # pt= 00:31:58.796 17:28:59 -- scripts/common.sh@395 -- # return 1 00:31:58.796 17:28:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:31:58.796 1+0 records in 00:31:58.796 1+0 records out 00:31:58.796 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00601196 s, 174 MB/s 00:31:58.796 17:28:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:31:58.796 17:28:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:31:58.796 17:28:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:31:58.796 17:28:59 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:31:58.796 17:28:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:31:59.054 No valid GPT data, bailing 00:31:59.054 17:28:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:31:59.054 17:28:59 -- scripts/common.sh@394 -- # pt= 00:31:59.054 17:28:59 -- scripts/common.sh@395 -- # return 1 00:31:59.054 17:28:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:31:59.054 1+0 records in 00:31:59.054 1+0 records out 00:31:59.054 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00587532 s, 178 MB/s 00:31:59.054 17:28:59 -- spdk/autotest.sh@105 -- # sync 00:31:59.054 17:28:59 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:31:59.054 17:28:59 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:31:59.054 17:28:59 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:32:01.620 17:29:02 -- spdk/autotest.sh@111 -- # uname -s 00:32:01.620 17:29:02 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:32:01.620 17:29:02 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:32:01.620 17:29:02 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:32:02.557 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:02.815 
Hugepages 00:32:02.815 node hugesize free / total 00:32:02.815 node0 1048576kB 0 / 0 00:32:02.815 node0 2048kB 0 / 0 00:32:02.815 00:32:02.815 Type BDF Vendor Device NUMA Driver Device Block devices 00:32:03.073 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:32:03.073 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:32:03.073 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:32:03.331 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:32:03.331 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:32:03.331 17:29:03 -- spdk/autotest.sh@117 -- # uname -s 00:32:03.331 17:29:03 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:32:03.331 17:29:03 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:32:03.331 17:29:03 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:03.898 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:04.505 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:32:04.765 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:32:04.765 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:32:04.765 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:32:04.765 17:29:05 -- common/autotest_common.sh@1517 -- # sleep 1 00:32:06.141 17:29:06 -- common/autotest_common.sh@1518 -- # bdfs=() 00:32:06.141 17:29:06 -- common/autotest_common.sh@1518 -- # local bdfs 00:32:06.141 17:29:06 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:32:06.141 17:29:06 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:32:06.141 17:29:06 -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:06.141 17:29:06 -- common/autotest_common.sh@1498 -- # local bdfs 00:32:06.141 17:29:06 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:06.141 17:29:06 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:06.141 17:29:06 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:06.141 17:29:06 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:32:06.141 17:29:06 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:32:06.141 17:29:06 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:06.400 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:06.659 Waiting for block devices as requested 00:32:06.659 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:06.917 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:06.917 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:07.175 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:12.442 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:12.442 17:29:12 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:32:12.442 17:29:12 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:32:12.442 17:29:12 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:32:12.442 17:29:12 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:32:12.442 17:29:12 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:32:12.442 17:29:12 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:32:12.442 17:29:12 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:32:12.442 17:29:12 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:32:12.442 17:29:12 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:32:12.442 17:29:12 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:32:12.443 17:29:12 -- common/autotest_common.sh@1531 -- # grep oacs 00:32:12.443 17:29:12 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:32:12.443 17:29:12 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:32:12.443 17:29:12 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:32:12.443 17:29:12 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:32:12.443 17:29:12 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:32:12.443 17:29:12 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:32:12.443 17:29:12 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:32:12.443 17:29:12 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:32:12.443 17:29:12 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:32:12.443 17:29:12 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:32:12.443 17:29:12 -- common/autotest_common.sh@1543 -- # continue 00:32:12.443 17:29:12 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:32:12.443 17:29:12 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:32:12.443 17:29:12 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:32:12.443 17:29:12 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:32:12.443 17:29:12 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:32:12.443 17:29:12 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:32:12.443 17:29:12 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:32:12.443 17:29:12 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:32:12.443 17:29:12 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:32:12.443 17:29:12 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:32:12.443 17:29:12 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:32:12.443 17:29:12 -- common/autotest_common.sh@1531 -- # grep oacs 00:32:12.443 17:29:12 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:32:12.443 17:29:12 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:32:12.443 17:29:12 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:32:12.443 17:29:12 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:32:12.443 17:29:12 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:32:12.443 17:29:12 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:32:12.443 17:29:12 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:32:12.443 17:29:12 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:32:12.443 17:29:12 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:32:12.443 17:29:12 -- common/autotest_common.sh@1543 -- # continue 00:32:12.443 17:29:12 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:32:12.443 17:29:12 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:32:12.443 17:29:12 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:32:12.443 17:29:12 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:32:12.443 17:29:12 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:32:12.443 17:29:12 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:32:12.443 17:29:12 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:32:12.443 17:29:12 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:32:12.443 17:29:12 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:32:12.443 17:29:12 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:32:12.443 17:29:12 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:32:12.443 17:29:12 -- common/autotest_common.sh@1531 -- # grep oacs 00:32:12.443 17:29:12 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:32:12.443 17:29:12 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:32:12.443 17:29:12 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:32:12.443 17:29:12 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:32:12.443 17:29:12 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:32:12.443 17:29:12 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:32:12.443 17:29:12 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:32:12.443 17:29:12 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:32:12.443 17:29:12 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:32:12.443 17:29:12 -- common/autotest_common.sh@1543 -- # continue 00:32:12.443 17:29:12 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:32:12.443 17:29:12 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:32:12.443 17:29:12 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:32:12.443 17:29:12 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:32:12.443 17:29:12 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:32:12.443 17:29:12 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:32:12.443 17:29:12 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:32:12.443 17:29:12 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:32:12.443 17:29:12 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:32:12.443 17:29:12 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:32:12.443 17:29:12 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:32:12.443 17:29:12 -- common/autotest_common.sh@1531 -- # grep oacs 00:32:12.443 17:29:12 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:32:12.443 17:29:13 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:32:12.443 17:29:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:32:12.443 17:29:13 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:32:12.443 17:29:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:32:12.443 17:29:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:32:12.443 17:29:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:32:12.443 17:29:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:32:12.443 17:29:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:32:12.443 17:29:13 -- common/autotest_common.sh@1543 -- # continue 00:32:12.443 17:29:13 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:32:12.443 17:29:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:12.443 17:29:13 -- common/autotest_common.sh@10 -- # set +x 00:32:12.443 17:29:13 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:32:12.443 17:29:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:12.443 17:29:13 -- common/autotest_common.sh@10 -- # set +x 00:32:12.443 17:29:13 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:13.381 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:13.947 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:32:13.947 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:32:13.947 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:32:14.206 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:32:14.206 17:29:14 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:32:14.206 17:29:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:14.206 17:29:14 -- common/autotest_common.sh@10 -- # set +x 00:32:14.206 17:29:14 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:32:14.206 17:29:14 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:32:14.206 17:29:14 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:32:14.206 17:29:14 -- common/autotest_common.sh@1563 -- # bdfs=() 00:32:14.206 17:29:14 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:32:14.206 17:29:14 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:32:14.206 17:29:14 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:32:14.206 17:29:14 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:32:14.206 17:29:14 -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:14.206 17:29:14 -- common/autotest_common.sh@1498 -- # local bdfs 00:32:14.206 17:29:14 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:14.206 17:29:14 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:14.206 17:29:14 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:14.465 17:29:14 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:32:14.465 17:29:14 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:32:14.465 17:29:14 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:32:14.465 17:29:14 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:32:14.465 17:29:14 -- common/autotest_common.sh@1566 -- # device=0x0010 00:32:14.465 17:29:14 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:32:14.465 17:29:14 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:32:14.465 17:29:14 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:32:14.465 17:29:14 -- common/autotest_common.sh@1566 -- # device=0x0010 00:32:14.465 17:29:14 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:32:14.465 17:29:14 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:32:14.465 17:29:14 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:32:14.465 17:29:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:32:14.465 17:29:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
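The per-controller records above all run the same capability probe: pull the Optional Admin Command Support (oacs) field out of nvme id-ctrl, test the namespace-management bit, and only then look at unvmcap to decide whether anything needs reverting. A condensed sketch of that probe; the bit mask is inferred from the traced values (oacs=' 0x12a' yielding oacs_ns_manage=8), and the loop and variable names are mine:

    # condensed reconstruction of the OACS/unvmcap probe traced above
    for ctrl in /dev/nvme0 /dev/nvme1 /dev/nvme2 /dev/nvme3; do
        oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)    # e.g. ' 0x12a'
        (( oacs & 0x8 )) || continue       # bit 3: controller supports namespace management
        unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)
        [[ $unvmcap -eq 0 ]] && continue   # no unallocated NVM capacity, nothing to revert
        echo "$ctrl has unallocated capacity: $unvmcap"
    done

With oacs=0x12a the mask yields 8, which is why every controller in this run takes the [[ 8 -ne 0 ]] branch and then moves on once unvmcap reads back as 0.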
00:32:14.465 17:29:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:32:14.465 17:29:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:32:14.465 17:29:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:32:14.465 17:29:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:32:14.465 17:29:15 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:32:14.465 17:29:15 -- common/autotest_common.sh@1572 -- # return 0 00:32:14.465 17:29:15 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:32:14.465 17:29:15 -- common/autotest_common.sh@1580 -- # return 0 00:32:14.465 17:29:15 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:32:14.465 17:29:15 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:32:14.465 17:29:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:32:14.465 17:29:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:32:14.465 17:29:15 -- spdk/autotest.sh@149 -- # timing_enter lib 00:32:14.465 17:29:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:14.465 17:29:15 -- common/autotest_common.sh@10 -- # set +x 00:32:14.465 17:29:15 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:32:14.465 17:29:15 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:32:14.465 17:29:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:14.465 17:29:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:14.466 17:29:15 -- common/autotest_common.sh@10 -- # set +x 00:32:14.466 ************************************ 00:32:14.466 START TEST env 00:32:14.466 ************************************ 00:32:14.466 17:29:15 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:32:14.724 * Looking for test storage... 00:32:14.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:32:14.724 17:29:15 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:14.724 17:29:15 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:14.724 17:29:15 env -- common/autotest_common.sh@1693 -- # lcov --version 00:32:14.724 17:29:15 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:14.724 17:29:15 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:14.724 17:29:15 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:14.724 17:29:15 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:14.724 17:29:15 env -- scripts/common.sh@336 -- # IFS=.-: 00:32:14.724 17:29:15 env -- scripts/common.sh@336 -- # read -ra ver1 00:32:14.724 17:29:15 env -- scripts/common.sh@337 -- # IFS=.-: 00:32:14.724 17:29:15 env -- scripts/common.sh@337 -- # read -ra ver2 00:32:14.724 17:29:15 env -- scripts/common.sh@338 -- # local 'op=<' 00:32:14.724 17:29:15 env -- scripts/common.sh@340 -- # ver1_l=2 00:32:14.724 17:29:15 env -- scripts/common.sh@341 -- # ver2_l=1 00:32:14.724 17:29:15 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:14.724 17:29:15 env -- scripts/common.sh@344 -- # case "$op" in 00:32:14.724 17:29:15 env -- scripts/common.sh@345 -- # : 1 00:32:14.724 17:29:15 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:14.724 17:29:15 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:14.724 17:29:15 env -- scripts/common.sh@365 -- # decimal 1 00:32:14.724 17:29:15 env -- scripts/common.sh@353 -- # local d=1 00:32:14.724 17:29:15 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:14.724 17:29:15 env -- scripts/common.sh@355 -- # echo 1 00:32:14.724 17:29:15 env -- scripts/common.sh@365 -- # ver1[v]=1 00:32:14.724 17:29:15 env -- scripts/common.sh@366 -- # decimal 2 00:32:14.724 17:29:15 env -- scripts/common.sh@353 -- # local d=2 00:32:14.724 17:29:15 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:14.724 17:29:15 env -- scripts/common.sh@355 -- # echo 2 00:32:14.724 17:29:15 env -- scripts/common.sh@366 -- # ver2[v]=2 00:32:14.724 17:29:15 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:14.724 17:29:15 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:14.724 17:29:15 env -- scripts/common.sh@368 -- # return 0 00:32:14.724 17:29:15 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:14.724 17:29:15 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:14.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.724 --rc genhtml_branch_coverage=1 00:32:14.724 --rc genhtml_function_coverage=1 00:32:14.724 --rc genhtml_legend=1 00:32:14.724 --rc geninfo_all_blocks=1 00:32:14.724 --rc geninfo_unexecuted_blocks=1 00:32:14.724 00:32:14.724 ' 00:32:14.724 17:29:15 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:14.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.724 --rc genhtml_branch_coverage=1 00:32:14.724 --rc genhtml_function_coverage=1 00:32:14.724 --rc genhtml_legend=1 00:32:14.724 --rc geninfo_all_blocks=1 00:32:14.725 --rc geninfo_unexecuted_blocks=1 00:32:14.725 00:32:14.725 ' 00:32:14.725 17:29:15 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:14.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.725 --rc genhtml_branch_coverage=1 00:32:14.725 --rc genhtml_function_coverage=1 00:32:14.725 --rc genhtml_legend=1 00:32:14.725 --rc geninfo_all_blocks=1 00:32:14.725 --rc geninfo_unexecuted_blocks=1 00:32:14.725 00:32:14.725 ' 00:32:14.725 17:29:15 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:14.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.725 --rc genhtml_branch_coverage=1 00:32:14.725 --rc genhtml_function_coverage=1 00:32:14.725 --rc genhtml_legend=1 00:32:14.725 --rc geninfo_all_blocks=1 00:32:14.725 --rc geninfo_unexecuted_blocks=1 00:32:14.725 00:32:14.725 ' 00:32:14.725 17:29:15 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:32:14.725 17:29:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:14.725 17:29:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:14.725 17:29:15 env -- common/autotest_common.sh@10 -- # set +x 00:32:14.725 ************************************ 00:32:14.725 START TEST env_memory 00:32:14.725 ************************************ 00:32:14.725 17:29:15 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:32:14.725 00:32:14.725 00:32:14.725 CUnit - A unit testing framework for C - Version 2.1-3 00:32:14.725 http://cunit.sourceforge.net/ 00:32:14.725 00:32:14.725 00:32:14.725 Suite: memory 00:32:14.725 Test: alloc and free memory map ...[2024-11-26 17:29:15.393597] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:32:14.984 passed 00:32:14.984 Test: mem map translation ...[2024-11-26 17:29:15.467705] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:32:14.984 [2024-11-26 17:29:15.467925] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:32:14.984 [2024-11-26 17:29:15.468009] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:32:14.984 [2024-11-26 17:29:15.468037] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:32:14.984 passed 00:32:14.984 Test: mem map registration ...[2024-11-26 17:29:15.547314] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:32:14.984 [2024-11-26 17:29:15.547396] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:32:14.984 passed 00:32:14.984 Test: mem map adjacent registrations ...passed 00:32:14.984 00:32:14.984 Run Summary: Type Total Ran Passed Failed Inactive 00:32:14.984 suites 1 1 n/a 0 0 00:32:14.984 tests 4 4 4 0 0 00:32:14.984 asserts 152 152 152 0 n/a 00:32:14.984 00:32:14.984 Elapsed time = 0.303 seconds 00:32:14.984 00:32:14.984 real 0m0.373s 00:32:14.984 user 0m0.320s 00:32:14.984 sys 0m0.038s 00:32:14.984 17:29:15 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:14.984 ************************************ 00:32:14.984 END TEST env_memory 00:32:14.984 ************************************ 00:32:14.984 17:29:15 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:32:15.243 17:29:15 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:32:15.243 17:29:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:15.243 17:29:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:15.243 17:29:15 env -- common/autotest_common.sh@10 -- # set +x 00:32:15.243 ************************************ 00:32:15.243 START TEST env_vtophys 00:32:15.243 ************************************ 00:32:15.243 17:29:15 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:32:15.243 EAL: lib.eal log level changed from notice to debug 00:32:15.243 EAL: Detected lcore 0 as core 0 on socket 0 00:32:15.243 EAL: Detected lcore 1 as core 0 on socket 0 00:32:15.243 EAL: Detected lcore 2 as core 0 on socket 0 00:32:15.243 EAL: Detected lcore 3 as core 0 on socket 0 00:32:15.243 EAL: Detected lcore 4 as core 0 on socket 0 00:32:15.243 EAL: Detected lcore 5 as core 0 on socket 0 00:32:15.243 EAL: Detected lcore 6 as core 0 on socket 0 00:32:15.243 EAL: Detected lcore 7 as core 0 on socket 0 00:32:15.243 EAL: Detected lcore 8 as core 0 on socket 0 00:32:15.243 EAL: Detected lcore 9 as core 0 on socket 0 00:32:15.243 EAL: Maximum logical cores by configuration: 128 00:32:15.243 EAL: Detected CPU lcores: 10 00:32:15.243 EAL: Detected NUMA nodes: 1 00:32:15.243 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:32:15.243 EAL: Detected shared linkage of DPDK 00:32:15.243 EAL: No 
shared files mode enabled, IPC will be disabled 00:32:15.243 EAL: Selected IOVA mode 'PA' 00:32:15.243 EAL: Probing VFIO support... 00:32:15.243 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:32:15.243 EAL: VFIO modules not loaded, skipping VFIO support... 00:32:15.243 EAL: Ask a virtual area of 0x2e000 bytes 00:32:15.243 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:32:15.243 EAL: Setting up physically contiguous memory... 00:32:15.243 EAL: Setting maximum number of open files to 524288 00:32:15.243 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:32:15.243 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:32:15.243 EAL: Ask a virtual area of 0x61000 bytes 00:32:15.243 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:32:15.243 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:32:15.243 EAL: Ask a virtual area of 0x400000000 bytes 00:32:15.243 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:32:15.243 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:32:15.243 EAL: Ask a virtual area of 0x61000 bytes 00:32:15.243 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:32:15.243 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:32:15.243 EAL: Ask a virtual area of 0x400000000 bytes 00:32:15.243 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:32:15.243 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:32:15.243 EAL: Ask a virtual area of 0x61000 bytes 00:32:15.243 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:32:15.243 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:32:15.243 EAL: Ask a virtual area of 0x400000000 bytes 00:32:15.243 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:32:15.243 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:32:15.243 EAL: Ask a virtual area of 0x61000 bytes 00:32:15.243 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:32:15.243 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:32:15.243 EAL: Ask a virtual area of 0x400000000 bytes 00:32:15.243 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:32:15.243 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:32:15.243 EAL: Hugepages will be freed exactly as allocated. 00:32:15.243 EAL: No shared files mode enabled, IPC is disabled 00:32:15.243 EAL: No shared files mode enabled, IPC is disabled 00:32:15.243 EAL: TSC frequency is ~2490000 KHz 00:32:15.243 EAL: Main lcore 0 is ready (tid=7faff0c16a40;cpuset=[0]) 00:32:15.243 EAL: Trying to obtain current memory policy. 00:32:15.243 EAL: Setting policy MPOL_PREFERRED for socket 0 00:32:15.243 EAL: Restoring previous memory policy: 0 00:32:15.503 EAL: request: mp_malloc_sync 00:32:15.503 EAL: No shared files mode enabled, IPC is disabled 00:32:15.503 EAL: Heap on socket 0 was expanded by 2MB 00:32:15.503 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:32:15.503 EAL: No PCI address specified using 'addr=' in: bus=pci 00:32:15.503 EAL: Mem event callback 'spdk:(nil)' registered 00:32:15.503 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:32:15.503 00:32:15.503 00:32:15.503 CUnit - A unit testing framework for C - Version 2.1-3 00:32:15.503 http://cunit.sourceforge.net/ 00:32:15.503 00:32:15.503 00:32:15.503 Suite: components_suite 00:32:15.761 Test: vtophys_malloc_test ...passed 00:32:15.761 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:32:15.761 EAL: Setting policy MPOL_PREFERRED for socket 0 00:32:15.761 EAL: Restoring previous memory policy: 4 00:32:15.761 EAL: Calling mem event callback 'spdk:(nil)' 00:32:15.761 EAL: request: mp_malloc_sync 00:32:15.761 EAL: No shared files mode enabled, IPC is disabled 00:32:15.761 EAL: Heap on socket 0 was expanded by 4MB 00:32:16.034 EAL: Calling mem event callback 'spdk:(nil)' 00:32:16.035 EAL: request: mp_malloc_sync 00:32:16.035 EAL: No shared files mode enabled, IPC is disabled 00:32:16.035 EAL: Heap on socket 0 was shrunk by 4MB 00:32:16.035 EAL: Trying to obtain current memory policy. 00:32:16.035 EAL: Setting policy MPOL_PREFERRED for socket 0 00:32:16.035 EAL: Restoring previous memory policy: 4 00:32:16.035 EAL: Calling mem event callback 'spdk:(nil)' 00:32:16.035 EAL: request: mp_malloc_sync 00:32:16.035 EAL: No shared files mode enabled, IPC is disabled 00:32:16.035 EAL: Heap on socket 0 was expanded by 6MB 00:32:16.035 EAL: Calling mem event callback 'spdk:(nil)' 00:32:16.035 EAL: request: mp_malloc_sync 00:32:16.035 EAL: No shared files mode enabled, IPC is disabled 00:32:16.035 EAL: Heap on socket 0 was shrunk by 6MB 00:32:16.035 EAL: Trying to obtain current memory policy. 00:32:16.035 EAL: Setting policy MPOL_PREFERRED for socket 0 00:32:16.035 EAL: Restoring previous memory policy: 4 00:32:16.035 EAL: Calling mem event callback 'spdk:(nil)' 00:32:16.035 EAL: request: mp_malloc_sync 00:32:16.035 EAL: No shared files mode enabled, IPC is disabled 00:32:16.035 EAL: Heap on socket 0 was expanded by 10MB 00:32:16.035 EAL: Calling mem event callback 'spdk:(nil)' 00:32:16.035 EAL: request: mp_malloc_sync 00:32:16.035 EAL: No shared files mode enabled, IPC is disabled 00:32:16.035 EAL: Heap on socket 0 was shrunk by 10MB 00:32:16.035 EAL: Trying to obtain current memory policy. 00:32:16.035 EAL: Setting policy MPOL_PREFERRED for socket 0 00:32:16.035 EAL: Restoring previous memory policy: 4 00:32:16.035 EAL: Calling mem event callback 'spdk:(nil)' 00:32:16.035 EAL: request: mp_malloc_sync 00:32:16.035 EAL: No shared files mode enabled, IPC is disabled 00:32:16.035 EAL: Heap on socket 0 was expanded by 18MB 00:32:16.035 EAL: Calling mem event callback 'spdk:(nil)' 00:32:16.035 EAL: request: mp_malloc_sync 00:32:16.035 EAL: No shared files mode enabled, IPC is disabled 00:32:16.035 EAL: Heap on socket 0 was shrunk by 18MB 00:32:16.035 EAL: Trying to obtain current memory policy. 00:32:16.035 EAL: Setting policy MPOL_PREFERRED for socket 0 00:32:16.035 EAL: Restoring previous memory policy: 4 00:32:16.035 EAL: Calling mem event callback 'spdk:(nil)' 00:32:16.035 EAL: request: mp_malloc_sync 00:32:16.035 EAL: No shared files mode enabled, IPC is disabled 00:32:16.035 EAL: Heap on socket 0 was expanded by 34MB 00:32:16.035 EAL: Calling mem event callback 'spdk:(nil)' 00:32:16.035 EAL: request: mp_malloc_sync 00:32:16.035 EAL: No shared files mode enabled, IPC is disabled 00:32:16.035 EAL: Heap on socket 0 was shrunk by 34MB 00:32:16.321 EAL: Trying to obtain current memory policy. 
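The invalid-parameter errors logged by spdk_mem_map_set_translation in the mem map translation test above are the expected output: the env layer tracks translations at 2 MB hugepage granularity, so both vaddr and len must be 2 MB multiples, and vaddr must lie in the canonical user address range (281474976710656 is 2^48, just past it). A minimal sketch of the rejected and accepted call shapes, assuming an initialized SPDK env; the no-op notify callback and function names are illustrative, the spdk_mem_map_* calls match the error strings in the log:

    /* Sketch only: mirrors the invalid-parameter cases rejected above. */
    #include "spdk/env.h"

    #define MAP_2MB (2 * 1024 * 1024ULL)

    static int
    noop_notify(void *cb_ctx, struct spdk_mem_map *map,
                enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
    {
        return 0; /* accept every register/unregister notification */
    }

    static const struct spdk_mem_map_ops ops = { .notify_cb = noop_notify };

    void
    mem_map_alignment_demo(void) /* illustrative name */
    {
        struct spdk_mem_map *map = spdk_mem_map_alloc(0, &ops, NULL);
        int rc;

        /* Fails: len is not a multiple of the 2 MB granule. */
        rc = spdk_mem_map_set_translation(map, MAP_2MB, 1234, 0);
        /* Fails: vaddr is not 2 MB aligned. */
        rc = spdk_mem_map_set_translation(map, 1234, MAP_2MB, 0);
        /* Succeeds: both arguments sit on 2 MB boundaries. */
        rc = spdk_mem_map_set_translation(map, MAP_2MB, MAP_2MB, 0x1000);
        (void)rc;

        spdk_mem_map_free(&map);
    }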
00:32:16.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:32:16.321 EAL: Restoring previous memory policy: 4 00:32:16.321 EAL: Calling mem event callback 'spdk:(nil)' 00:32:16.321 EAL: request: mp_malloc_sync 00:32:16.321 EAL: No shared files mode enabled, IPC is disabled 00:32:16.321 EAL: Heap on socket 0 was expanded by 66MB 00:32:16.321 EAL: Calling mem event callback 'spdk:(nil)' 00:32:16.321 EAL: request: mp_malloc_sync 00:32:16.321 EAL: No shared files mode enabled, IPC is disabled 00:32:16.321 EAL: Heap on socket 0 was shrunk by 66MB 00:32:16.321 EAL: Trying to obtain current memory policy. 00:32:16.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:32:16.583 EAL: Restoring previous memory policy: 4 00:32:16.583 EAL: Calling mem event callback 'spdk:(nil)' 00:32:16.583 EAL: request: mp_malloc_sync 00:32:16.583 EAL: No shared files mode enabled, IPC is disabled 00:32:16.583 EAL: Heap on socket 0 was expanded by 130MB 00:32:16.583 EAL: Calling mem event callback 'spdk:(nil)' 00:32:16.842 EAL: request: mp_malloc_sync 00:32:16.842 EAL: No shared files mode enabled, IPC is disabled 00:32:16.842 EAL: Heap on socket 0 was shrunk by 130MB 00:32:16.842 EAL: Trying to obtain current memory policy. 00:32:16.842 EAL: Setting policy MPOL_PREFERRED for socket 0 00:32:17.101 EAL: Restoring previous memory policy: 4 00:32:17.101 EAL: Calling mem event callback 'spdk:(nil)' 00:32:17.101 EAL: request: mp_malloc_sync 00:32:17.101 EAL: No shared files mode enabled, IPC is disabled 00:32:17.101 EAL: Heap on socket 0 was expanded by 258MB 00:32:17.667 EAL: Calling mem event callback 'spdk:(nil)' 00:32:17.667 EAL: request: mp_malloc_sync 00:32:17.667 EAL: No shared files mode enabled, IPC is disabled 00:32:17.667 EAL: Heap on socket 0 was shrunk by 258MB 00:32:17.925 EAL: Trying to obtain current memory policy. 00:32:17.925 EAL: Setting policy MPOL_PREFERRED for socket 0 00:32:18.183 EAL: Restoring previous memory policy: 4 00:32:18.183 EAL: Calling mem event callback 'spdk:(nil)' 00:32:18.183 EAL: request: mp_malloc_sync 00:32:18.183 EAL: No shared files mode enabled, IPC is disabled 00:32:18.183 EAL: Heap on socket 0 was expanded by 514MB 00:32:19.117 EAL: Calling mem event callback 'spdk:(nil)' 00:32:19.117 EAL: request: mp_malloc_sync 00:32:19.117 EAL: No shared files mode enabled, IPC is disabled 00:32:19.117 EAL: Heap on socket 0 was shrunk by 514MB 00:32:20.053 EAL: Trying to obtain current memory policy. 
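The expand/shrink pairs above trace the vtophys_spdk_malloc_test pattern: each progressively larger DMA allocation forces EAL to back more hugepage memory (mem event callback 'spdk:(nil)', heap expanded), and the matching free returns it (heap shrunk). A sketch of that doubling loop, assuming an initialized env; the exact sizes and overheads in the real test differ from this simplification:

    /* Sketch only: the doubling-allocation pattern behind the
     * expand/shrink messages above. Buffer sizes are illustrative. */
    #include <assert.h>
    #include "spdk/env.h"

    void
    vtophys_walk_demo(void) /* illustrative name */
    {
        for (size_t mb = 4; mb <= 1024; mb *= 2) {
            void *buf = spdk_dma_malloc(mb * 1024 * 1024, 0x200000, NULL);
            if (buf == NULL) {
                break; /* EAL could not expand the hugepage heap further */
            }
            /* The buffer must have a valid physical translation. */
            assert(spdk_vtophys(buf, NULL) != SPDK_VTOPHYS_ERROR);
            spdk_dma_free(buf); /* triggers the matching "shrunk by" event */
        }
    }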
00:32:20.053 EAL: Setting policy MPOL_PREFERRED for socket 0 00:32:20.312 EAL: Restoring previous memory policy: 4 00:32:20.312 EAL: Calling mem event callback 'spdk:(nil)' 00:32:20.312 EAL: request: mp_malloc_sync 00:32:20.312 EAL: No shared files mode enabled, IPC is disabled 00:32:20.312 EAL: Heap on socket 0 was expanded by 1026MB 00:32:22.215 EAL: Calling mem event callback 'spdk:(nil)' 00:32:22.474 EAL: request: mp_malloc_sync 00:32:22.474 EAL: No shared files mode enabled, IPC is disabled 00:32:22.474 EAL: Heap on socket 0 was shrunk by 1026MB 00:32:24.376 passed 00:32:24.376 00:32:24.376 Run Summary: Type Total Ran Passed Failed Inactive 00:32:24.376 suites 1 1 n/a 0 0 00:32:24.376 tests 2 2 2 0 0 00:32:24.376 asserts 5761 5761 5761 0 n/a 00:32:24.376 00:32:24.376 Elapsed time = 8.944 seconds 00:32:24.376 EAL: Calling mem event callback 'spdk:(nil)' 00:32:24.376 EAL: request: mp_malloc_sync 00:32:24.376 EAL: No shared files mode enabled, IPC is disabled 00:32:24.376 EAL: Heap on socket 0 was shrunk by 2MB 00:32:24.376 EAL: No shared files mode enabled, IPC is disabled 00:32:24.376 EAL: No shared files mode enabled, IPC is disabled 00:32:24.376 EAL: No shared files mode enabled, IPC is disabled 00:32:24.376 00:32:24.376 real 0m9.310s 00:32:24.376 user 0m8.103s 00:32:24.376 sys 0m1.031s 00:32:24.376 17:29:25 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:24.376 ************************************ 00:32:24.376 END TEST env_vtophys 00:32:24.376 ************************************ 00:32:24.376 17:29:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:32:24.634 17:29:25 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:32:24.634 17:29:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:24.634 17:29:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:24.634 17:29:25 env -- common/autotest_common.sh@10 -- # set +x 00:32:24.634 ************************************ 00:32:24.634 START TEST env_pci 00:32:24.634 ************************************ 00:32:24.634 17:29:25 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:32:24.634 00:32:24.634 00:32:24.634 CUnit - A unit testing framework for C - Version 2.1-3 00:32:24.634 http://cunit.sourceforge.net/ 00:32:24.634 00:32:24.634 00:32:24.634 Suite: pci 00:32:24.634 Test: pci_hook ...[2024-11-26 17:29:25.171600] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57666 has claimed it 00:32:24.634 EAL: Cannot find device (10000:00:01.0) 00:32:24.634 EAL: Failed to attach device on primary process 00:32:24.634 passed 00:32:24.634 00:32:24.634 Run Summary: Type Total Ran Passed Failed Inactive 00:32:24.634 suites 1 1 n/a 0 0 00:32:24.634 tests 1 1 1 0 0 00:32:24.634 asserts 25 25 25 0 n/a 00:32:24.634 00:32:24.634 Elapsed time = 0.015 seconds 00:32:24.634 00:32:24.634 real 0m0.131s 00:32:24.634 user 0m0.058s 00:32:24.634 sys 0m0.070s 00:32:24.634 17:29:25 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:24.634 17:29:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:32:24.634 ************************************ 00:32:24.634 END TEST env_pci 00:32:24.634 ************************************ 00:32:24.634 17:29:25 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:32:24.634 17:29:25 env -- env/env.sh@15 -- # uname 00:32:24.634 17:29:25 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:32:24.634 17:29:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:32:24.634 17:29:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:32:24.634 17:29:25 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:24.635 17:29:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:24.635 17:29:25 env -- common/autotest_common.sh@10 -- # set +x 00:32:24.894 ************************************ 00:32:24.894 START TEST env_dpdk_post_init 00:32:24.894 ************************************ 00:32:24.894 17:29:25 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:32:24.894 EAL: Detected CPU lcores: 10 00:32:24.894 EAL: Detected NUMA nodes: 1 00:32:24.894 EAL: Detected shared linkage of DPDK 00:32:24.894 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:32:24.894 EAL: Selected IOVA mode 'PA' 00:32:24.894 TELEMETRY: No legacy callbacks, legacy socket not created 00:32:25.153 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:32:25.153 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:32:25.153 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:32:25.153 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:32:25.153 Starting DPDK initialization... 00:32:25.153 Starting SPDK post initialization... 00:32:25.153 SPDK NVMe probe 00:32:25.153 Attaching to 0000:00:10.0 00:32:25.153 Attaching to 0000:00:11.0 00:32:25.153 Attaching to 0000:00:12.0 00:32:25.153 Attaching to 0000:00:13.0 00:32:25.153 Attached to 0000:00:10.0 00:32:25.153 Attached to 0000:00:11.0 00:32:25.153 Attached to 0000:00:13.0 00:32:25.153 Attached to 0000:00:12.0 00:32:25.153 Cleaning up... 
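The probe/attach sequence above is driven by the -c 0x1 --base-virtaddr=0x200000000000 arguments passed to env_dpdk_post_init. A sketch of the equivalent initialization against the public spdk/env.h and spdk/nvme.h APIs; the process name and callback bodies are illustrative, not the test's actual source:

    /* Sketch only: env init plus a blanket NVMe probe, as exercised above. */
    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        return true; /* attach to every controller found, e.g. 0000:00:10.0 */
    }

    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
    }

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "env_dpdk_post_init";      /* illustrative */
        opts.core_mask = "0x1";                 /* matches -c 0x1 */
        opts.base_virtaddr = 0x200000000000ULL; /* matches --base-virtaddr */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) ? 1 : 0;
    }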
00:32:25.153 00:32:25.153 real 0m0.328s 00:32:25.153 user 0m0.120s 00:32:25.153 sys 0m0.112s 00:32:25.153 ************************************ 00:32:25.153 END TEST env_dpdk_post_init 00:32:25.153 ************************************ 00:32:25.153 17:29:25 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:25.153 17:29:25 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:32:25.153 17:29:25 env -- env/env.sh@26 -- # uname 00:32:25.153 17:29:25 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:32:25.153 17:29:25 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:32:25.153 17:29:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:25.153 17:29:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:25.153 17:29:25 env -- common/autotest_common.sh@10 -- # set +x 00:32:25.153 ************************************ 00:32:25.153 START TEST env_mem_callbacks 00:32:25.153 ************************************ 00:32:25.153 17:29:25 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:32:25.153 EAL: Detected CPU lcores: 10 00:32:25.153 EAL: Detected NUMA nodes: 1 00:32:25.153 EAL: Detected shared linkage of DPDK 00:32:25.153 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:32:25.153 EAL: Selected IOVA mode 'PA' 00:32:25.411 TELEMETRY: No legacy callbacks, legacy socket not created 00:32:25.411 00:32:25.411 00:32:25.411 CUnit - A unit testing framework for C - Version 2.1-3 00:32:25.411 http://cunit.sourceforge.net/ 00:32:25.411 00:32:25.411 00:32:25.411 Suite: memory 00:32:25.411 Test: test ... 00:32:25.411 register 0x200000200000 2097152 00:32:25.411 malloc 3145728 00:32:25.411 register 0x200000400000 4194304 00:32:25.411 buf 0x2000004fffc0 len 3145728 PASSED 00:32:25.411 malloc 64 00:32:25.411 buf 0x2000004ffec0 len 64 PASSED 00:32:25.411 malloc 4194304 00:32:25.411 register 0x200000800000 6291456 00:32:25.411 buf 0x2000009fffc0 len 4194304 PASSED 00:32:25.411 free 0x2000004fffc0 3145728 00:32:25.411 free 0x2000004ffec0 64 00:32:25.411 unregister 0x200000400000 4194304 PASSED 00:32:25.411 free 0x2000009fffc0 4194304 00:32:25.411 unregister 0x200000800000 6291456 PASSED 00:32:25.411 malloc 8388608 00:32:25.411 register 0x200000400000 10485760 00:32:25.411 buf 0x2000005fffc0 len 8388608 PASSED 00:32:25.411 free 0x2000005fffc0 8388608 00:32:25.411 unregister 0x200000400000 10485760 PASSED 00:32:25.411 passed 00:32:25.411 00:32:25.411 Run Summary: Type Total Ran Passed Failed Inactive 00:32:25.411 suites 1 1 n/a 0 0 00:32:25.411 tests 1 1 1 0 0 00:32:25.411 asserts 15 15 15 0 n/a 00:32:25.412 00:32:25.412 Elapsed time = 0.088 seconds 00:32:25.412 ************************************ 00:32:25.412 END TEST env_mem_callbacks 00:32:25.412 ************************************ 00:32:25.412 00:32:25.412 real 0m0.306s 00:32:25.412 user 0m0.119s 00:32:25.412 sys 0m0.082s 00:32:25.412 17:29:26 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:25.412 17:29:26 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:32:25.670 ************************************ 00:32:25.671 END TEST env 00:32:25.671 ************************************ 00:32:25.671 00:32:25.671 real 0m11.082s 00:32:25.671 user 0m8.977s 00:32:25.671 sys 0m1.709s 00:32:25.671 17:29:26 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:25.671 17:29:26 env -- 
common/autotest_common.sh@10 -- # set +x 00:32:25.671 17:29:26 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:32:25.671 17:29:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:25.671 17:29:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:25.671 17:29:26 -- common/autotest_common.sh@10 -- # set +x 00:32:25.671 ************************************ 00:32:25.671 START TEST rpc 00:32:25.671 ************************************ 00:32:25.671 17:29:26 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:32:25.671 * Looking for test storage... 00:32:25.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:32:25.671 17:29:26 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:25.671 17:29:26 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:32:25.671 17:29:26 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:25.929 17:29:26 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:25.929 17:29:26 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:25.929 17:29:26 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:25.929 17:29:26 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:25.929 17:29:26 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:32:25.929 17:29:26 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:32:25.929 17:29:26 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:32:25.929 17:29:26 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:32:25.929 17:29:26 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:32:25.929 17:29:26 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:32:25.929 17:29:26 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:32:25.929 17:29:26 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:25.929 17:29:26 rpc -- scripts/common.sh@344 -- # case "$op" in 00:32:25.929 17:29:26 rpc -- scripts/common.sh@345 -- # : 1 00:32:25.929 17:29:26 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:25.929 17:29:26 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:25.929 17:29:26 rpc -- scripts/common.sh@365 -- # decimal 1 00:32:25.929 17:29:26 rpc -- scripts/common.sh@353 -- # local d=1 00:32:25.929 17:29:26 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:25.929 17:29:26 rpc -- scripts/common.sh@355 -- # echo 1 00:32:25.929 17:29:26 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:32:25.929 17:29:26 rpc -- scripts/common.sh@366 -- # decimal 2 00:32:25.929 17:29:26 rpc -- scripts/common.sh@353 -- # local d=2 00:32:25.929 17:29:26 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:25.929 17:29:26 rpc -- scripts/common.sh@355 -- # echo 2 00:32:25.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
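The register/unregister trace printed by the mem_callbacks suite earlier comes from a per-map notify callback: allocating a map replays REGISTER notifications for regions already known to the env library, and later DMA allocations (and frees that release hugepages) surface as further events. A sketch under those assumptions; mem_callbacks_demo and the print format are illustrative:

    /* Sketch only: a notify callback producing a register/unregister
     * trace like the one in the mem_callbacks suite above. */
    #include <stdio.h>
    #include "spdk/env.h"

    static int
    print_notify(void *cb_ctx, struct spdk_mem_map *map,
                 enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
    {
        printf("%s %p %zu\n",
               action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
               vaddr, size);
        return 0;
    }

    static const struct spdk_mem_map_ops ops = { .notify_cb = print_notify };

    void
    mem_callbacks_demo(void) /* illustrative name */
    {
        /* Allocating the map replays REGISTER notifications for every
         * region already registered with the env library. */
        struct spdk_mem_map *map = spdk_mem_map_alloc(0, &ops, NULL);

        /* New DMA-safe allocations surface as further register events. */
        void *buf = spdk_dma_malloc(3 * 1024 * 1024, 0, NULL);
        /* Frees can surface as unregister events once the backing
         * hugepages are actually released. */
        spdk_dma_free(buf);

        spdk_mem_map_free(&map);
    }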
00:32:25.929 17:29:26 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:32:25.929 17:29:26 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:25.929 17:29:26 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:25.929 17:29:26 rpc -- scripts/common.sh@368 -- # return 0 00:32:25.929 17:29:26 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:25.929 17:29:26 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:25.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.929 --rc genhtml_branch_coverage=1 00:32:25.929 --rc genhtml_function_coverage=1 00:32:25.929 --rc genhtml_legend=1 00:32:25.929 --rc geninfo_all_blocks=1 00:32:25.929 --rc geninfo_unexecuted_blocks=1 00:32:25.929 00:32:25.929 ' 00:32:25.929 17:29:26 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:25.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.929 --rc genhtml_branch_coverage=1 00:32:25.929 --rc genhtml_function_coverage=1 00:32:25.929 --rc genhtml_legend=1 00:32:25.929 --rc geninfo_all_blocks=1 00:32:25.929 --rc geninfo_unexecuted_blocks=1 00:32:25.929 00:32:25.929 ' 00:32:25.929 17:29:26 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:25.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.929 --rc genhtml_branch_coverage=1 00:32:25.929 --rc genhtml_function_coverage=1 00:32:25.929 --rc genhtml_legend=1 00:32:25.929 --rc geninfo_all_blocks=1 00:32:25.929 --rc geninfo_unexecuted_blocks=1 00:32:25.929 00:32:25.929 ' 00:32:25.929 17:29:26 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:25.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.929 --rc genhtml_branch_coverage=1 00:32:25.929 --rc genhtml_function_coverage=1 00:32:25.929 --rc genhtml_legend=1 00:32:25.929 --rc geninfo_all_blocks=1 00:32:25.929 --rc geninfo_unexecuted_blocks=1 00:32:25.929 00:32:25.929 ' 00:32:25.929 17:29:26 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57798 00:32:25.929 17:29:26 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:32:25.929 17:29:26 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57798 00:32:25.929 17:29:26 rpc -- common/autotest_common.sh@835 -- # '[' -z 57798 ']' 00:32:25.930 17:29:26 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:25.930 17:29:26 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:25.930 17:29:26 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:25.930 17:29:26 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:32:25.930 17:29:26 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:25.930 17:29:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:32:25.930 [2024-11-26 17:29:26.557789] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:32:25.930 [2024-11-26 17:29:26.558109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57798 ] 00:32:26.189 [2024-11-26 17:29:26.747535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.189 [2024-11-26 17:29:26.867429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
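waitforlisten and rpc_cmd in the trace above boil down to JSON-RPC 2.0 over the Unix domain socket that spdk_tgt opens at /var/tmp/spdk.sock. A minimal client sketch in plain POSIX sockets, with no SPDK headers needed; spdk_get_version is a real RPC method, while the buffer size and single-shot (no-retry) connect are illustrative simplifications of what waitforlisten does in a loop:

    /* Sketch only: one JSON-RPC 2.0 request over the spdk_tgt socket. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int
    main(void)
    {
        const char *req =
            "{\"jsonrpc\":\"2.0\",\"method\":\"spdk_get_version\",\"id\":1}";
        char resp[4096];
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        ssize_t n;

        if (fd < 0) {
            return 1;
        }
        strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* Server not listening yet; waitforlisten retries here. */
            perror("connect");
            return 1;
        }
        (void)write(fd, req, strlen(req));
        n = read(fd, resp, sizeof(resp) - 1);
        if (n > 0) {
            resp[n] = '\0';
            printf("%s\n", resp); /* version JSON, as rpc_cmd would print */
        }
        close(fd);
        return 0;
    }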
00:32:26.189 [2024-11-26 17:29:26.867508] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57798' to capture a snapshot of events at runtime. 00:32:26.189 [2024-11-26 17:29:26.867524] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:26.189 [2024-11-26 17:29:26.867537] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:26.189 [2024-11-26 17:29:26.867548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57798 for offline analysis/debug. 00:32:26.189 [2024-11-26 17:29:26.868857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.126 17:29:27 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:27.126 17:29:27 rpc -- common/autotest_common.sh@868 -- # return 0 00:32:27.126 17:29:27 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:32:27.126 17:29:27 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:32:27.126 17:29:27 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:32:27.126 17:29:27 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:32:27.126 17:29:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:27.126 17:29:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:27.126 17:29:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:32:27.126 ************************************ 00:32:27.126 START TEST rpc_integrity 00:32:27.126 ************************************ 00:32:27.126 17:29:27 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:32:27.126 17:29:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:32:27.126 17:29:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.126 17:29:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:27.385 17:29:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.385 17:29:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:32:27.385 17:29:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:32:27.385 17:29:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:32:27.385 17:29:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:32:27.385 17:29:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.385 17:29:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:27.385 17:29:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.385 17:29:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:32:27.385 17:29:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:32:27.385 17:29:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.385 17:29:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:27.385 17:29:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.385 17:29:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:32:27.385 { 00:32:27.385 "name": "Malloc0", 00:32:27.385 "aliases": [ 00:32:27.385 "8d5aea29-faf7-4a9c-bc63-ca407e54ac09" 00:32:27.385 ], 
00:32:27.385 "product_name": "Malloc disk", 00:32:27.385 "block_size": 512, 00:32:27.385 "num_blocks": 16384, 00:32:27.385 "uuid": "8d5aea29-faf7-4a9c-bc63-ca407e54ac09", 00:32:27.385 "assigned_rate_limits": { 00:32:27.385 "rw_ios_per_sec": 0, 00:32:27.385 "rw_mbytes_per_sec": 0, 00:32:27.385 "r_mbytes_per_sec": 0, 00:32:27.385 "w_mbytes_per_sec": 0 00:32:27.385 }, 00:32:27.385 "claimed": false, 00:32:27.385 "zoned": false, 00:32:27.385 "supported_io_types": { 00:32:27.385 "read": true, 00:32:27.385 "write": true, 00:32:27.385 "unmap": true, 00:32:27.385 "flush": true, 00:32:27.385 "reset": true, 00:32:27.385 "nvme_admin": false, 00:32:27.385 "nvme_io": false, 00:32:27.385 "nvme_io_md": false, 00:32:27.385 "write_zeroes": true, 00:32:27.385 "zcopy": true, 00:32:27.385 "get_zone_info": false, 00:32:27.385 "zone_management": false, 00:32:27.385 "zone_append": false, 00:32:27.385 "compare": false, 00:32:27.385 "compare_and_write": false, 00:32:27.385 "abort": true, 00:32:27.385 "seek_hole": false, 00:32:27.385 "seek_data": false, 00:32:27.385 "copy": true, 00:32:27.385 "nvme_iov_md": false 00:32:27.386 }, 00:32:27.386 "memory_domains": [ 00:32:27.386 { 00:32:27.386 "dma_device_id": "system", 00:32:27.386 "dma_device_type": 1 00:32:27.386 }, 00:32:27.386 { 00:32:27.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:27.386 "dma_device_type": 2 00:32:27.386 } 00:32:27.386 ], 00:32:27.386 "driver_specific": {} 00:32:27.386 } 00:32:27.386 ]' 00:32:27.386 17:29:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:32:27.386 17:29:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:32:27.386 17:29:27 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:32:27.386 17:29:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.386 17:29:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:27.386 [2024-11-26 17:29:27.972739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:32:27.386 [2024-11-26 17:29:27.972823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:27.386 [2024-11-26 17:29:27.972862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:32:27.386 [2024-11-26 17:29:27.972880] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:27.386 [2024-11-26 17:29:27.975521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:27.386 [2024-11-26 17:29:27.975567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:32:27.386 Passthru0 00:32:27.386 17:29:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.386 17:29:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:32:27.386 17:29:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.386 17:29:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:27.386 17:29:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.386 17:29:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:32:27.386 { 00:32:27.386 "name": "Malloc0", 00:32:27.386 "aliases": [ 00:32:27.386 "8d5aea29-faf7-4a9c-bc63-ca407e54ac09" 00:32:27.386 ], 00:32:27.386 "product_name": "Malloc disk", 00:32:27.386 "block_size": 512, 00:32:27.386 "num_blocks": 16384, 00:32:27.386 "uuid": "8d5aea29-faf7-4a9c-bc63-ca407e54ac09", 00:32:27.386 "assigned_rate_limits": { 00:32:27.386 "rw_ios_per_sec": 0, 
00:32:27.386 "rw_mbytes_per_sec": 0, 00:32:27.386 "r_mbytes_per_sec": 0, 00:32:27.386 "w_mbytes_per_sec": 0 00:32:27.386 }, 00:32:27.386 "claimed": true, 00:32:27.386 "claim_type": "exclusive_write", 00:32:27.386 "zoned": false, 00:32:27.386 "supported_io_types": { 00:32:27.386 "read": true, 00:32:27.386 "write": true, 00:32:27.386 "unmap": true, 00:32:27.386 "flush": true, 00:32:27.386 "reset": true, 00:32:27.386 "nvme_admin": false, 00:32:27.386 "nvme_io": false, 00:32:27.386 "nvme_io_md": false, 00:32:27.386 "write_zeroes": true, 00:32:27.386 "zcopy": true, 00:32:27.386 "get_zone_info": false, 00:32:27.386 "zone_management": false, 00:32:27.386 "zone_append": false, 00:32:27.386 "compare": false, 00:32:27.386 "compare_and_write": false, 00:32:27.386 "abort": true, 00:32:27.386 "seek_hole": false, 00:32:27.386 "seek_data": false, 00:32:27.386 "copy": true, 00:32:27.386 "nvme_iov_md": false 00:32:27.386 }, 00:32:27.386 "memory_domains": [ 00:32:27.386 { 00:32:27.386 "dma_device_id": "system", 00:32:27.386 "dma_device_type": 1 00:32:27.386 }, 00:32:27.386 { 00:32:27.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:27.386 "dma_device_type": 2 00:32:27.386 } 00:32:27.386 ], 00:32:27.386 "driver_specific": {} 00:32:27.386 }, 00:32:27.386 { 00:32:27.386 "name": "Passthru0", 00:32:27.386 "aliases": [ 00:32:27.386 "e7d61f59-5bfa-5a15-8440-a5d1586e4fff" 00:32:27.386 ], 00:32:27.386 "product_name": "passthru", 00:32:27.386 "block_size": 512, 00:32:27.386 "num_blocks": 16384, 00:32:27.386 "uuid": "e7d61f59-5bfa-5a15-8440-a5d1586e4fff", 00:32:27.386 "assigned_rate_limits": { 00:32:27.386 "rw_ios_per_sec": 0, 00:32:27.386 "rw_mbytes_per_sec": 0, 00:32:27.386 "r_mbytes_per_sec": 0, 00:32:27.386 "w_mbytes_per_sec": 0 00:32:27.386 }, 00:32:27.386 "claimed": false, 00:32:27.386 "zoned": false, 00:32:27.386 "supported_io_types": { 00:32:27.386 "read": true, 00:32:27.386 "write": true, 00:32:27.386 "unmap": true, 00:32:27.386 "flush": true, 00:32:27.386 "reset": true, 00:32:27.386 "nvme_admin": false, 00:32:27.386 "nvme_io": false, 00:32:27.386 "nvme_io_md": false, 00:32:27.386 "write_zeroes": true, 00:32:27.386 "zcopy": true, 00:32:27.386 "get_zone_info": false, 00:32:27.386 "zone_management": false, 00:32:27.386 "zone_append": false, 00:32:27.386 "compare": false, 00:32:27.386 "compare_and_write": false, 00:32:27.386 "abort": true, 00:32:27.386 "seek_hole": false, 00:32:27.386 "seek_data": false, 00:32:27.386 "copy": true, 00:32:27.386 "nvme_iov_md": false 00:32:27.386 }, 00:32:27.386 "memory_domains": [ 00:32:27.386 { 00:32:27.386 "dma_device_id": "system", 00:32:27.386 "dma_device_type": 1 00:32:27.386 }, 00:32:27.386 { 00:32:27.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:27.386 "dma_device_type": 2 00:32:27.386 } 00:32:27.386 ], 00:32:27.386 "driver_specific": { 00:32:27.386 "passthru": { 00:32:27.386 "name": "Passthru0", 00:32:27.386 "base_bdev_name": "Malloc0" 00:32:27.386 } 00:32:27.386 } 00:32:27.386 } 00:32:27.386 ]' 00:32:27.386 17:29:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:32:27.386 17:29:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:32:27.386 17:29:28 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:32:27.386 17:29:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.386 17:29:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:27.386 17:29:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.386 17:29:28 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc0 00:32:27.386 17:29:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.386 17:29:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:27.645 17:29:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.645 17:29:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:32:27.645 17:29:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.645 17:29:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:27.645 17:29:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.645 17:29:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:32:27.645 17:29:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:32:27.645 17:29:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:32:27.645 ************************************ 00:32:27.645 END TEST rpc_integrity 00:32:27.645 ************************************ 00:32:27.645 00:32:27.645 real 0m0.349s 00:32:27.645 user 0m0.174s 00:32:27.645 sys 0m0.066s 00:32:27.645 17:29:28 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:27.645 17:29:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:27.645 17:29:28 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:32:27.645 17:29:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:27.645 17:29:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:27.645 17:29:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:32:27.645 ************************************ 00:32:27.645 START TEST rpc_plugins 00:32:27.645 ************************************ 00:32:27.645 17:29:28 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:32:27.645 17:29:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:32:27.645 17:29:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.645 17:29:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:32:27.645 17:29:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.645 17:29:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:32:27.645 17:29:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:32:27.645 17:29:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.645 17:29:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:32:27.645 17:29:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.645 17:29:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:32:27.645 { 00:32:27.645 "name": "Malloc1", 00:32:27.645 "aliases": [ 00:32:27.645 "fecca3ca-4b91-4ab2-b172-dd242ccc4710" 00:32:27.645 ], 00:32:27.645 "product_name": "Malloc disk", 00:32:27.645 "block_size": 4096, 00:32:27.645 "num_blocks": 256, 00:32:27.645 "uuid": "fecca3ca-4b91-4ab2-b172-dd242ccc4710", 00:32:27.645 "assigned_rate_limits": { 00:32:27.645 "rw_ios_per_sec": 0, 00:32:27.645 "rw_mbytes_per_sec": 0, 00:32:27.645 "r_mbytes_per_sec": 0, 00:32:27.645 "w_mbytes_per_sec": 0 00:32:27.645 }, 00:32:27.645 "claimed": false, 00:32:27.645 "zoned": false, 00:32:27.645 "supported_io_types": { 00:32:27.645 "read": true, 00:32:27.645 "write": true, 00:32:27.645 "unmap": true, 00:32:27.645 "flush": true, 00:32:27.645 "reset": true, 00:32:27.645 "nvme_admin": false, 00:32:27.645 "nvme_io": false, 00:32:27.645 "nvme_io_md": false, 00:32:27.645 "write_zeroes": true, 
00:32:27.645 "zcopy": true, 00:32:27.645 "get_zone_info": false, 00:32:27.645 "zone_management": false, 00:32:27.645 "zone_append": false, 00:32:27.645 "compare": false, 00:32:27.645 "compare_and_write": false, 00:32:27.645 "abort": true, 00:32:27.645 "seek_hole": false, 00:32:27.645 "seek_data": false, 00:32:27.645 "copy": true, 00:32:27.645 "nvme_iov_md": false 00:32:27.645 }, 00:32:27.645 "memory_domains": [ 00:32:27.645 { 00:32:27.645 "dma_device_id": "system", 00:32:27.645 "dma_device_type": 1 00:32:27.645 }, 00:32:27.645 { 00:32:27.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:27.645 "dma_device_type": 2 00:32:27.645 } 00:32:27.645 ], 00:32:27.645 "driver_specific": {} 00:32:27.645 } 00:32:27.645 ]' 00:32:27.645 17:29:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:32:27.645 17:29:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:32:27.645 17:29:28 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:32:27.645 17:29:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.645 17:29:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:32:27.645 17:29:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.645 17:29:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:32:27.645 17:29:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.645 17:29:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:32:27.904 17:29:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.904 17:29:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:32:27.904 17:29:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:32:27.904 ************************************ 00:32:27.904 END TEST rpc_plugins 00:32:27.904 ************************************ 00:32:27.904 17:29:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:32:27.904 00:32:27.904 real 0m0.172s 00:32:27.904 user 0m0.098s 00:32:27.904 sys 0m0.030s 00:32:27.904 17:29:28 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:27.904 17:29:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:32:27.904 17:29:28 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:32:27.904 17:29:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:27.904 17:29:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:27.904 17:29:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:32:27.904 ************************************ 00:32:27.904 START TEST rpc_trace_cmd_test 00:32:27.904 ************************************ 00:32:27.904 17:29:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:32:27.904 17:29:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:32:27.904 17:29:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:32:27.904 17:29:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.904 17:29:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.904 17:29:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.904 17:29:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:32:27.904 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57798", 00:32:27.904 "tpoint_group_mask": "0x8", 00:32:27.904 "iscsi_conn": { 00:32:27.904 "mask": "0x2", 00:32:27.904 "tpoint_mask": "0x0" 00:32:27.904 }, 00:32:27.904 "scsi": { 00:32:27.904 
"mask": "0x4", 00:32:27.904 "tpoint_mask": "0x0" 00:32:27.904 }, 00:32:27.904 "bdev": { 00:32:27.904 "mask": "0x8", 00:32:27.904 "tpoint_mask": "0xffffffffffffffff" 00:32:27.904 }, 00:32:27.904 "nvmf_rdma": { 00:32:27.904 "mask": "0x10", 00:32:27.904 "tpoint_mask": "0x0" 00:32:27.904 }, 00:32:27.904 "nvmf_tcp": { 00:32:27.904 "mask": "0x20", 00:32:27.904 "tpoint_mask": "0x0" 00:32:27.904 }, 00:32:27.904 "ftl": { 00:32:27.905 "mask": "0x40", 00:32:27.905 "tpoint_mask": "0x0" 00:32:27.905 }, 00:32:27.905 "blobfs": { 00:32:27.905 "mask": "0x80", 00:32:27.905 "tpoint_mask": "0x0" 00:32:27.905 }, 00:32:27.905 "dsa": { 00:32:27.905 "mask": "0x200", 00:32:27.905 "tpoint_mask": "0x0" 00:32:27.905 }, 00:32:27.905 "thread": { 00:32:27.905 "mask": "0x400", 00:32:27.905 "tpoint_mask": "0x0" 00:32:27.905 }, 00:32:27.905 "nvme_pcie": { 00:32:27.905 "mask": "0x800", 00:32:27.905 "tpoint_mask": "0x0" 00:32:27.905 }, 00:32:27.905 "iaa": { 00:32:27.905 "mask": "0x1000", 00:32:27.905 "tpoint_mask": "0x0" 00:32:27.905 }, 00:32:27.905 "nvme_tcp": { 00:32:27.905 "mask": "0x2000", 00:32:27.905 "tpoint_mask": "0x0" 00:32:27.905 }, 00:32:27.905 "bdev_nvme": { 00:32:27.905 "mask": "0x4000", 00:32:27.905 "tpoint_mask": "0x0" 00:32:27.905 }, 00:32:27.905 "sock": { 00:32:27.905 "mask": "0x8000", 00:32:27.905 "tpoint_mask": "0x0" 00:32:27.905 }, 00:32:27.905 "blob": { 00:32:27.905 "mask": "0x10000", 00:32:27.905 "tpoint_mask": "0x0" 00:32:27.905 }, 00:32:27.905 "bdev_raid": { 00:32:27.905 "mask": "0x20000", 00:32:27.905 "tpoint_mask": "0x0" 00:32:27.905 }, 00:32:27.905 "scheduler": { 00:32:27.905 "mask": "0x40000", 00:32:27.905 "tpoint_mask": "0x0" 00:32:27.905 } 00:32:27.905 }' 00:32:27.905 17:29:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:32:27.905 17:29:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:32:27.905 17:29:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:32:27.905 17:29:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:32:27.905 17:29:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:32:28.162 17:29:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:32:28.162 17:29:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:32:28.162 17:29:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:32:28.162 17:29:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:32:28.162 ************************************ 00:32:28.162 END TEST rpc_trace_cmd_test 00:32:28.162 ************************************ 00:32:28.162 17:29:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:32:28.162 00:32:28.162 real 0m0.260s 00:32:28.162 user 0m0.205s 00:32:28.162 sys 0m0.042s 00:32:28.162 17:29:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:28.162 17:29:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.162 17:29:28 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:32:28.162 17:29:28 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:32:28.162 17:29:28 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:32:28.162 17:29:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:28.162 17:29:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:28.162 17:29:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:32:28.162 ************************************ 00:32:28.162 START TEST rpc_daemon_integrity 00:32:28.162 
************************************ 00:32:28.162 17:29:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:32:28.162 17:29:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:32:28.162 17:29:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.162 17:29:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:28.162 17:29:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.162 17:29:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:32:28.162 17:29:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:32:28.422 17:29:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:32:28.422 17:29:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:32:28.422 17:29:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.422 17:29:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:28.422 17:29:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.422 17:29:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:32:28.422 17:29:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:32:28.422 17:29:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.422 17:29:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:28.422 17:29:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.422 17:29:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:32:28.422 { 00:32:28.422 "name": "Malloc2", 00:32:28.422 "aliases": [ 00:32:28.422 "26490af3-a37c-4f8b-b677-c940610980c3" 00:32:28.422 ], 00:32:28.422 "product_name": "Malloc disk", 00:32:28.422 "block_size": 512, 00:32:28.422 "num_blocks": 16384, 00:32:28.422 "uuid": "26490af3-a37c-4f8b-b677-c940610980c3", 00:32:28.422 "assigned_rate_limits": { 00:32:28.422 "rw_ios_per_sec": 0, 00:32:28.422 "rw_mbytes_per_sec": 0, 00:32:28.422 "r_mbytes_per_sec": 0, 00:32:28.422 "w_mbytes_per_sec": 0 00:32:28.422 }, 00:32:28.422 "claimed": false, 00:32:28.422 "zoned": false, 00:32:28.422 "supported_io_types": { 00:32:28.422 "read": true, 00:32:28.422 "write": true, 00:32:28.422 "unmap": true, 00:32:28.422 "flush": true, 00:32:28.422 "reset": true, 00:32:28.422 "nvme_admin": false, 00:32:28.422 "nvme_io": false, 00:32:28.422 "nvme_io_md": false, 00:32:28.422 "write_zeroes": true, 00:32:28.422 "zcopy": true, 00:32:28.422 "get_zone_info": false, 00:32:28.422 "zone_management": false, 00:32:28.422 "zone_append": false, 00:32:28.422 "compare": false, 00:32:28.422 "compare_and_write": false, 00:32:28.422 "abort": true, 00:32:28.422 "seek_hole": false, 00:32:28.422 "seek_data": false, 00:32:28.422 "copy": true, 00:32:28.422 "nvme_iov_md": false 00:32:28.422 }, 00:32:28.422 "memory_domains": [ 00:32:28.422 { 00:32:28.422 "dma_device_id": "system", 00:32:28.422 "dma_device_type": 1 00:32:28.422 }, 00:32:28.422 { 00:32:28.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:28.422 "dma_device_type": 2 00:32:28.422 } 00:32:28.422 ], 00:32:28.422 "driver_specific": {} 00:32:28.422 } 00:32:28.422 ]' 00:32:28.422 17:29:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:32:28.422 17:29:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:32:28.422 17:29:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd 
bdev_passthru_create -b Malloc2 -p Passthru0 00:32:28.422 17:29:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.422 17:29:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:28.422 [2024-11-26 17:29:28.957338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:32:28.422 [2024-11-26 17:29:28.957415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:28.422 [2024-11-26 17:29:28.957441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:32:28.422 [2024-11-26 17:29:28.957466] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:28.422 [2024-11-26 17:29:28.960079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:28.422 [2024-11-26 17:29:28.960130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:32:28.422 Passthru0 00:32:28.422 17:29:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.422 17:29:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:32:28.422 17:29:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.422 17:29:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:28.422 17:29:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.422 17:29:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:32:28.422 { 00:32:28.422 "name": "Malloc2", 00:32:28.422 "aliases": [ 00:32:28.422 "26490af3-a37c-4f8b-b677-c940610980c3" 00:32:28.422 ], 00:32:28.422 "product_name": "Malloc disk", 00:32:28.422 "block_size": 512, 00:32:28.422 "num_blocks": 16384, 00:32:28.422 "uuid": "26490af3-a37c-4f8b-b677-c940610980c3", 00:32:28.422 "assigned_rate_limits": { 00:32:28.422 "rw_ios_per_sec": 0, 00:32:28.422 "rw_mbytes_per_sec": 0, 00:32:28.422 "r_mbytes_per_sec": 0, 00:32:28.422 "w_mbytes_per_sec": 0 00:32:28.422 }, 00:32:28.422 "claimed": true, 00:32:28.422 "claim_type": "exclusive_write", 00:32:28.422 "zoned": false, 00:32:28.422 "supported_io_types": { 00:32:28.422 "read": true, 00:32:28.422 "write": true, 00:32:28.422 "unmap": true, 00:32:28.422 "flush": true, 00:32:28.422 "reset": true, 00:32:28.422 "nvme_admin": false, 00:32:28.422 "nvme_io": false, 00:32:28.422 "nvme_io_md": false, 00:32:28.422 "write_zeroes": true, 00:32:28.422 "zcopy": true, 00:32:28.422 "get_zone_info": false, 00:32:28.422 "zone_management": false, 00:32:28.422 "zone_append": false, 00:32:28.422 "compare": false, 00:32:28.422 "compare_and_write": false, 00:32:28.422 "abort": true, 00:32:28.422 "seek_hole": false, 00:32:28.422 "seek_data": false, 00:32:28.422 "copy": true, 00:32:28.422 "nvme_iov_md": false 00:32:28.422 }, 00:32:28.422 "memory_domains": [ 00:32:28.422 { 00:32:28.422 "dma_device_id": "system", 00:32:28.422 "dma_device_type": 1 00:32:28.422 }, 00:32:28.422 { 00:32:28.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:28.422 "dma_device_type": 2 00:32:28.422 } 00:32:28.422 ], 00:32:28.422 "driver_specific": {} 00:32:28.422 }, 00:32:28.422 { 00:32:28.422 "name": "Passthru0", 00:32:28.422 "aliases": [ 00:32:28.422 "652c9c98-9254-5105-b8f5-f7b6bf490fa1" 00:32:28.422 ], 00:32:28.422 "product_name": "passthru", 00:32:28.422 "block_size": 512, 00:32:28.422 "num_blocks": 16384, 00:32:28.422 "uuid": "652c9c98-9254-5105-b8f5-f7b6bf490fa1", 00:32:28.422 "assigned_rate_limits": { 00:32:28.422 
"rw_ios_per_sec": 0, 00:32:28.422 "rw_mbytes_per_sec": 0, 00:32:28.422 "r_mbytes_per_sec": 0, 00:32:28.422 "w_mbytes_per_sec": 0 00:32:28.422 }, 00:32:28.422 "claimed": false, 00:32:28.422 "zoned": false, 00:32:28.422 "supported_io_types": { 00:32:28.422 "read": true, 00:32:28.422 "write": true, 00:32:28.422 "unmap": true, 00:32:28.422 "flush": true, 00:32:28.422 "reset": true, 00:32:28.422 "nvme_admin": false, 00:32:28.422 "nvme_io": false, 00:32:28.422 "nvme_io_md": false, 00:32:28.422 "write_zeroes": true, 00:32:28.422 "zcopy": true, 00:32:28.422 "get_zone_info": false, 00:32:28.422 "zone_management": false, 00:32:28.422 "zone_append": false, 00:32:28.422 "compare": false, 00:32:28.422 "compare_and_write": false, 00:32:28.422 "abort": true, 00:32:28.422 "seek_hole": false, 00:32:28.422 "seek_data": false, 00:32:28.422 "copy": true, 00:32:28.422 "nvme_iov_md": false 00:32:28.422 }, 00:32:28.422 "memory_domains": [ 00:32:28.422 { 00:32:28.422 "dma_device_id": "system", 00:32:28.422 "dma_device_type": 1 00:32:28.422 }, 00:32:28.422 { 00:32:28.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:28.422 "dma_device_type": 2 00:32:28.422 } 00:32:28.422 ], 00:32:28.422 "driver_specific": { 00:32:28.422 "passthru": { 00:32:28.422 "name": "Passthru0", 00:32:28.422 "base_bdev_name": "Malloc2" 00:32:28.422 } 00:32:28.422 } 00:32:28.422 } 00:32:28.422 ]' 00:32:28.422 17:29:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:32:28.422 17:29:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:32:28.422 17:29:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:32:28.422 17:29:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.422 17:29:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:28.422 17:29:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.422 17:29:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:32:28.422 17:29:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.422 17:29:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:28.422 17:29:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.422 17:29:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:32:28.422 17:29:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.422 17:29:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:28.422 17:29:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.422 17:29:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:32:28.422 17:29:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:32:28.681 ************************************ 00:32:28.681 END TEST rpc_daemon_integrity 00:32:28.681 ************************************ 00:32:28.681 17:29:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:32:28.681 00:32:28.681 real 0m0.349s 00:32:28.681 user 0m0.172s 00:32:28.681 sys 0m0.074s 00:32:28.681 17:29:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:28.681 17:29:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:32:28.681 17:29:29 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:28.681 17:29:29 rpc -- rpc/rpc.sh@84 -- # killprocess 57798 00:32:28.681 17:29:29 rpc -- 
common/autotest_common.sh@954 -- # '[' -z 57798 ']' 00:32:28.681 17:29:29 rpc -- common/autotest_common.sh@958 -- # kill -0 57798 00:32:28.681 17:29:29 rpc -- common/autotest_common.sh@959 -- # uname 00:32:28.681 17:29:29 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:28.681 17:29:29 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57798 00:32:28.681 killing process with pid 57798 00:32:28.681 17:29:29 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:28.681 17:29:29 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:28.681 17:29:29 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57798' 00:32:28.681 17:29:29 rpc -- common/autotest_common.sh@973 -- # kill 57798 00:32:28.681 17:29:29 rpc -- common/autotest_common.sh@978 -- # wait 57798 00:32:31.216 00:32:31.216 real 0m5.564s 00:32:31.216 user 0m6.059s 00:32:31.216 sys 0m1.092s 00:32:31.216 ************************************ 00:32:31.216 END TEST rpc 00:32:31.216 ************************************ 00:32:31.216 17:29:31 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:31.216 17:29:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:32:31.216 17:29:31 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:32:31.216 17:29:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:31.216 17:29:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:31.216 17:29:31 -- common/autotest_common.sh@10 -- # set +x 00:32:31.216 ************************************ 00:32:31.216 START TEST skip_rpc 00:32:31.216 ************************************ 00:32:31.216 17:29:31 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:32:31.475 * Looking for test storage... 00:32:31.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:32:31.475 17:29:31 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:31.475 17:29:31 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:32:31.475 17:29:31 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:31.475 17:29:32 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:31.475 17:29:32 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:31.475 17:29:32 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:31.475 17:29:32 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:31.475 17:29:32 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:32:31.475 17:29:32 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:32:31.475 17:29:32 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:32:31.475 17:29:32 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:32:31.475 17:29:32 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:32:31.475 17:29:32 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:32:31.475 17:29:32 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:32:31.475 17:29:32 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:31.475 17:29:32 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:32:31.475 17:29:32 skip_rpc -- scripts/common.sh@345 -- # : 1 00:32:31.475 17:29:32 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:31.475 17:29:32 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:31.475 17:29:32 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:32:31.475 17:29:32 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:32:31.475 17:29:32 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:31.475 17:29:32 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:32:31.475 17:29:32 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:32:31.476 17:29:32 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:32:31.476 17:29:32 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:32:31.476 17:29:32 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:31.476 17:29:32 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:32:31.476 17:29:32 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:32:31.476 17:29:32 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:31.476 17:29:32 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:31.476 17:29:32 skip_rpc -- scripts/common.sh@368 -- # return 0 00:32:31.476 17:29:32 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:31.476 17:29:32 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:31.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.476 --rc genhtml_branch_coverage=1 00:32:31.476 --rc genhtml_function_coverage=1 00:32:31.476 --rc genhtml_legend=1 00:32:31.476 --rc geninfo_all_blocks=1 00:32:31.476 --rc geninfo_unexecuted_blocks=1 00:32:31.476 00:32:31.476 ' 00:32:31.476 17:29:32 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:31.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.476 --rc genhtml_branch_coverage=1 00:32:31.476 --rc genhtml_function_coverage=1 00:32:31.476 --rc genhtml_legend=1 00:32:31.476 --rc geninfo_all_blocks=1 00:32:31.476 --rc geninfo_unexecuted_blocks=1 00:32:31.476 00:32:31.476 ' 00:32:31.476 17:29:32 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:31.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.476 --rc genhtml_branch_coverage=1 00:32:31.476 --rc genhtml_function_coverage=1 00:32:31.476 --rc genhtml_legend=1 00:32:31.476 --rc geninfo_all_blocks=1 00:32:31.476 --rc geninfo_unexecuted_blocks=1 00:32:31.476 00:32:31.476 ' 00:32:31.476 17:29:32 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:31.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.476 --rc genhtml_branch_coverage=1 00:32:31.476 --rc genhtml_function_coverage=1 00:32:31.476 --rc genhtml_legend=1 00:32:31.476 --rc geninfo_all_blocks=1 00:32:31.476 --rc geninfo_unexecuted_blocks=1 00:32:31.476 00:32:31.476 ' 00:32:31.476 17:29:32 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:32:31.476 17:29:32 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:32:31.476 17:29:32 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:32:31.476 17:29:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:31.476 17:29:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:31.476 17:29:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:31.476 ************************************ 00:32:31.476 START TEST skip_rpc 00:32:31.476 ************************************ 00:32:31.476 17:29:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:32:31.476 17:29:32 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=58033 00:32:31.476 17:29:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:32:31.476 17:29:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:32:31.476 17:29:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:32:31.734 [2024-11-26 17:29:32.211176] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:32:31.734 [2024-11-26 17:29:32.211509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58033 ] 00:32:31.734 [2024-11-26 17:29:32.397552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.993 [2024-11-26 17:29:32.515727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58033 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58033 ']' 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58033 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58033 00:32:37.258 killing process with pid 58033 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58033' 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- common/autotest_common.sh@973 
-- # kill 58033 00:32:37.258 17:29:37 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58033 00:32:39.792 00:32:39.792 real 0m7.907s 00:32:39.792 user 0m7.377s 00:32:39.792 sys 0m0.445s 00:32:39.792 17:29:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:39.792 17:29:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:39.792 ************************************ 00:32:39.792 END TEST skip_rpc 00:32:39.792 ************************************ 00:32:39.792 17:29:40 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:32:39.792 17:29:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:39.792 17:29:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:39.792 17:29:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:39.792 ************************************ 00:32:39.792 START TEST skip_rpc_with_json 00:32:39.792 ************************************ 00:32:39.792 17:29:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:32:39.792 17:29:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:32:39.792 17:29:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58147 00:32:39.792 17:29:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:32:39.792 17:29:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:32:39.792 17:29:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58147 00:32:39.792 17:29:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58147 ']' 00:32:39.792 17:29:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:39.792 17:29:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:39.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:39.792 17:29:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:39.792 17:29:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:39.792 17:29:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:32:39.792 [2024-11-26 17:29:40.184849] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
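The skip_rpc case that just completed (7.9 s of wall time) is the simplest test in the suite: launch spdk_tgt with --no-rpc-server, prove that an RPC call fails while no socket exists, then tear the target down; the startup banner here belongs to the next case, skip_rpc_with_json, which builds on the same scaffolding. A minimal sketch of that failure assertion, assuming rpc_cmd and the process cleanup behave like the autotest_common.sh helpers traced above:

# illustrative only; the binary path and 5 s settle time mirror the traced test
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
spdk_pid=$!
sleep 5
if rpc_cmd spdk_get_version; then     # must fail: no RPC server was started
    echo 'unexpected: RPC answered' >&2
    exit 1
fi
kill "$spdk_pid" && wait "$spdk_pid"  # what killprocess does, minus its retries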
00:32:39.792 [2024-11-26 17:29:40.185000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58147 ] 00:32:39.792 [2024-11-26 17:29:40.374122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:40.050 [2024-11-26 17:29:40.498214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:40.985 17:29:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:40.985 17:29:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:32:40.985 17:29:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:32:40.985 17:29:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.985 17:29:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:32:40.985 [2024-11-26 17:29:41.454055] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:32:40.985 request: 00:32:40.985 { 00:32:40.985 "trtype": "tcp", 00:32:40.985 "method": "nvmf_get_transports", 00:32:40.985 "req_id": 1 00:32:40.985 } 00:32:40.985 Got JSON-RPC error response 00:32:40.985 response: 00:32:40.985 { 00:32:40.985 "code": -19, 00:32:40.985 "message": "No such device" 00:32:40.985 } 00:32:40.985 17:29:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:40.985 17:29:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:32:40.985 17:29:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.985 17:29:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:32:40.985 [2024-11-26 17:29:41.466127] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:40.985 17:29:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.985 17:29:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:32:40.985 17:29:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.985 17:29:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:32:40.985 17:29:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.985 17:29:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:32:40.985 { 00:32:40.985 "subsystems": [ 00:32:40.985 { 00:32:40.985 "subsystem": "fsdev", 00:32:40.985 "config": [ 00:32:40.985 { 00:32:40.985 "method": "fsdev_set_opts", 00:32:40.985 "params": { 00:32:40.985 "fsdev_io_pool_size": 65535, 00:32:40.985 "fsdev_io_cache_size": 256 00:32:40.985 } 00:32:40.985 } 00:32:40.985 ] 00:32:40.985 }, 00:32:40.985 { 00:32:40.985 "subsystem": "keyring", 00:32:40.985 "config": [] 00:32:40.985 }, 00:32:40.985 { 00:32:40.985 "subsystem": "iobuf", 00:32:40.985 "config": [ 00:32:40.985 { 00:32:40.985 "method": "iobuf_set_options", 00:32:40.985 "params": { 00:32:40.985 "small_pool_count": 8192, 00:32:40.985 "large_pool_count": 1024, 00:32:40.985 "small_bufsize": 8192, 00:32:40.985 "large_bufsize": 135168, 00:32:40.985 "enable_numa": false 00:32:40.985 } 00:32:40.985 } 00:32:40.985 ] 00:32:40.985 }, 00:32:40.985 { 00:32:40.985 "subsystem": "sock", 00:32:40.985 "config": [ 00:32:40.985 { 
00:32:40.985 "method": "sock_set_default_impl", 00:32:40.985 "params": { 00:32:40.985 "impl_name": "posix" 00:32:40.985 } 00:32:40.985 }, 00:32:40.985 { 00:32:40.985 "method": "sock_impl_set_options", 00:32:40.985 "params": { 00:32:40.985 "impl_name": "ssl", 00:32:40.985 "recv_buf_size": 4096, 00:32:40.985 "send_buf_size": 4096, 00:32:40.986 "enable_recv_pipe": true, 00:32:40.986 "enable_quickack": false, 00:32:40.986 "enable_placement_id": 0, 00:32:40.986 "enable_zerocopy_send_server": true, 00:32:40.986 "enable_zerocopy_send_client": false, 00:32:40.986 "zerocopy_threshold": 0, 00:32:40.986 "tls_version": 0, 00:32:40.986 "enable_ktls": false 00:32:40.986 } 00:32:40.986 }, 00:32:40.986 { 00:32:40.986 "method": "sock_impl_set_options", 00:32:40.986 "params": { 00:32:40.986 "impl_name": "posix", 00:32:40.986 "recv_buf_size": 2097152, 00:32:40.986 "send_buf_size": 2097152, 00:32:40.986 "enable_recv_pipe": true, 00:32:40.986 "enable_quickack": false, 00:32:40.986 "enable_placement_id": 0, 00:32:40.986 "enable_zerocopy_send_server": true, 00:32:40.986 "enable_zerocopy_send_client": false, 00:32:40.986 "zerocopy_threshold": 0, 00:32:40.986 "tls_version": 0, 00:32:40.986 "enable_ktls": false 00:32:40.986 } 00:32:40.986 } 00:32:40.986 ] 00:32:40.986 }, 00:32:40.986 { 00:32:40.986 "subsystem": "vmd", 00:32:40.986 "config": [] 00:32:40.986 }, 00:32:40.986 { 00:32:40.986 "subsystem": "accel", 00:32:40.986 "config": [ 00:32:40.986 { 00:32:40.986 "method": "accel_set_options", 00:32:40.986 "params": { 00:32:40.986 "small_cache_size": 128, 00:32:40.986 "large_cache_size": 16, 00:32:40.986 "task_count": 2048, 00:32:40.986 "sequence_count": 2048, 00:32:40.986 "buf_count": 2048 00:32:40.986 } 00:32:40.986 } 00:32:40.986 ] 00:32:40.986 }, 00:32:40.986 { 00:32:40.986 "subsystem": "bdev", 00:32:40.986 "config": [ 00:32:40.986 { 00:32:40.986 "method": "bdev_set_options", 00:32:40.986 "params": { 00:32:40.986 "bdev_io_pool_size": 65535, 00:32:40.986 "bdev_io_cache_size": 256, 00:32:40.986 "bdev_auto_examine": true, 00:32:40.986 "iobuf_small_cache_size": 128, 00:32:40.986 "iobuf_large_cache_size": 16 00:32:40.986 } 00:32:40.986 }, 00:32:40.986 { 00:32:40.986 "method": "bdev_raid_set_options", 00:32:40.986 "params": { 00:32:40.986 "process_window_size_kb": 1024, 00:32:40.986 "process_max_bandwidth_mb_sec": 0 00:32:40.986 } 00:32:40.986 }, 00:32:40.986 { 00:32:40.986 "method": "bdev_iscsi_set_options", 00:32:40.986 "params": { 00:32:40.986 "timeout_sec": 30 00:32:40.986 } 00:32:40.986 }, 00:32:40.986 { 00:32:40.986 "method": "bdev_nvme_set_options", 00:32:40.986 "params": { 00:32:40.986 "action_on_timeout": "none", 00:32:40.986 "timeout_us": 0, 00:32:40.986 "timeout_admin_us": 0, 00:32:40.986 "keep_alive_timeout_ms": 10000, 00:32:40.986 "arbitration_burst": 0, 00:32:40.986 "low_priority_weight": 0, 00:32:40.986 "medium_priority_weight": 0, 00:32:40.986 "high_priority_weight": 0, 00:32:40.986 "nvme_adminq_poll_period_us": 10000, 00:32:40.986 "nvme_ioq_poll_period_us": 0, 00:32:40.986 "io_queue_requests": 0, 00:32:40.986 "delay_cmd_submit": true, 00:32:40.986 "transport_retry_count": 4, 00:32:40.986 "bdev_retry_count": 3, 00:32:40.986 "transport_ack_timeout": 0, 00:32:40.986 "ctrlr_loss_timeout_sec": 0, 00:32:40.986 "reconnect_delay_sec": 0, 00:32:40.986 "fast_io_fail_timeout_sec": 0, 00:32:40.986 "disable_auto_failback": false, 00:32:40.986 "generate_uuids": false, 00:32:40.986 "transport_tos": 0, 00:32:40.986 "nvme_error_stat": false, 00:32:40.986 "rdma_srq_size": 0, 00:32:40.986 "io_path_stat": false, 
00:32:40.986 "allow_accel_sequence": false, 00:32:40.986 "rdma_max_cq_size": 0, 00:32:40.986 "rdma_cm_event_timeout_ms": 0, 00:32:40.986 "dhchap_digests": [ 00:32:40.986 "sha256", 00:32:40.986 "sha384", 00:32:40.986 "sha512" 00:32:40.986 ], 00:32:40.986 "dhchap_dhgroups": [ 00:32:40.986 "null", 00:32:40.986 "ffdhe2048", 00:32:40.986 "ffdhe3072", 00:32:40.986 "ffdhe4096", 00:32:40.986 "ffdhe6144", 00:32:40.986 "ffdhe8192" 00:32:40.986 ] 00:32:40.986 } 00:32:40.986 }, 00:32:40.986 { 00:32:40.986 "method": "bdev_nvme_set_hotplug", 00:32:40.986 "params": { 00:32:40.986 "period_us": 100000, 00:32:40.986 "enable": false 00:32:40.986 } 00:32:40.986 }, 00:32:40.986 { 00:32:40.986 "method": "bdev_wait_for_examine" 00:32:40.986 } 00:32:40.986 ] 00:32:40.986 }, 00:32:40.986 { 00:32:40.986 "subsystem": "scsi", 00:32:40.986 "config": null 00:32:40.986 }, 00:32:40.986 { 00:32:40.986 "subsystem": "scheduler", 00:32:40.986 "config": [ 00:32:40.986 { 00:32:40.986 "method": "framework_set_scheduler", 00:32:40.986 "params": { 00:32:40.986 "name": "static" 00:32:40.986 } 00:32:40.986 } 00:32:40.986 ] 00:32:40.986 }, 00:32:40.986 { 00:32:40.986 "subsystem": "vhost_scsi", 00:32:40.986 "config": [] 00:32:40.986 }, 00:32:40.986 { 00:32:40.986 "subsystem": "vhost_blk", 00:32:40.986 "config": [] 00:32:40.986 }, 00:32:40.986 { 00:32:40.986 "subsystem": "ublk", 00:32:40.986 "config": [] 00:32:40.986 }, 00:32:40.986 { 00:32:40.986 "subsystem": "nbd", 00:32:40.986 "config": [] 00:32:40.986 }, 00:32:40.986 { 00:32:40.986 "subsystem": "nvmf", 00:32:40.986 "config": [ 00:32:40.986 { 00:32:40.986 "method": "nvmf_set_config", 00:32:40.986 "params": { 00:32:40.986 "discovery_filter": "match_any", 00:32:40.986 "admin_cmd_passthru": { 00:32:40.986 "identify_ctrlr": false 00:32:40.986 }, 00:32:40.986 "dhchap_digests": [ 00:32:40.986 "sha256", 00:32:40.986 "sha384", 00:32:40.986 "sha512" 00:32:40.986 ], 00:32:40.986 "dhchap_dhgroups": [ 00:32:40.986 "null", 00:32:40.986 "ffdhe2048", 00:32:40.986 "ffdhe3072", 00:32:40.986 "ffdhe4096", 00:32:40.986 "ffdhe6144", 00:32:40.986 "ffdhe8192" 00:32:40.986 ] 00:32:40.986 } 00:32:40.986 }, 00:32:40.986 { 00:32:40.986 "method": "nvmf_set_max_subsystems", 00:32:40.986 "params": { 00:32:40.986 "max_subsystems": 1024 00:32:40.986 } 00:32:40.986 }, 00:32:40.986 { 00:32:40.986 "method": "nvmf_set_crdt", 00:32:40.986 "params": { 00:32:40.986 "crdt1": 0, 00:32:40.986 "crdt2": 0, 00:32:40.986 "crdt3": 0 00:32:40.986 } 00:32:40.986 }, 00:32:40.986 { 00:32:40.986 "method": "nvmf_create_transport", 00:32:40.986 "params": { 00:32:40.986 "trtype": "TCP", 00:32:40.986 "max_queue_depth": 128, 00:32:40.987 "max_io_qpairs_per_ctrlr": 127, 00:32:40.987 "in_capsule_data_size": 4096, 00:32:40.987 "max_io_size": 131072, 00:32:40.987 "io_unit_size": 131072, 00:32:40.987 "max_aq_depth": 128, 00:32:40.987 "num_shared_buffers": 511, 00:32:40.987 "buf_cache_size": 4294967295, 00:32:40.987 "dif_insert_or_strip": false, 00:32:40.987 "zcopy": false, 00:32:40.987 "c2h_success": true, 00:32:40.987 "sock_priority": 0, 00:32:40.987 "abort_timeout_sec": 1, 00:32:40.987 "ack_timeout": 0, 00:32:40.987 "data_wr_pool_size": 0 00:32:40.987 } 00:32:40.987 } 00:32:40.987 ] 00:32:40.987 }, 00:32:40.987 { 00:32:40.987 "subsystem": "iscsi", 00:32:40.987 "config": [ 00:32:40.987 { 00:32:40.987 "method": "iscsi_set_options", 00:32:40.987 "params": { 00:32:40.987 "node_base": "iqn.2016-06.io.spdk", 00:32:40.987 "max_sessions": 128, 00:32:40.987 "max_connections_per_session": 2, 00:32:40.987 "max_queue_depth": 64, 00:32:40.987 
"default_time2wait": 2, 00:32:40.987 "default_time2retain": 20, 00:32:40.987 "first_burst_length": 8192, 00:32:40.987 "immediate_data": true, 00:32:40.987 "allow_duplicated_isid": false, 00:32:40.987 "error_recovery_level": 0, 00:32:40.987 "nop_timeout": 60, 00:32:40.987 "nop_in_interval": 30, 00:32:40.987 "disable_chap": false, 00:32:40.987 "require_chap": false, 00:32:40.987 "mutual_chap": false, 00:32:40.987 "chap_group": 0, 00:32:40.987 "max_large_datain_per_connection": 64, 00:32:40.987 "max_r2t_per_connection": 4, 00:32:40.987 "pdu_pool_size": 36864, 00:32:40.987 "immediate_data_pool_size": 16384, 00:32:40.987 "data_out_pool_size": 2048 00:32:40.987 } 00:32:40.987 } 00:32:40.987 ] 00:32:40.987 } 00:32:40.987 ] 00:32:40.987 } 00:32:40.987 17:29:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:32:40.987 17:29:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58147 00:32:40.987 17:29:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58147 ']' 00:32:40.987 17:29:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58147 00:32:40.987 17:29:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:32:40.987 17:29:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:40.987 17:29:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58147 00:32:41.246 17:29:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:41.246 killing process with pid 58147 00:32:41.246 17:29:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:41.246 17:29:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58147' 00:32:41.246 17:29:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58147 00:32:41.246 17:29:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58147 00:32:43.777 17:29:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58204 00:32:43.777 17:29:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:32:43.777 17:29:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:32:49.044 17:29:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58204 00:32:49.044 17:29:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58204 ']' 00:32:49.044 17:29:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58204 00:32:49.044 17:29:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:32:49.044 17:29:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:49.044 17:29:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58204 00:32:49.044 17:29:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:49.044 17:29:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:49.044 killing process with pid 58204 00:32:49.044 17:29:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58204' 00:32:49.044 17:29:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58204 00:32:49.044 17:29:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58204 00:32:51.577 17:29:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:32:51.577 17:29:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:32:51.577 00:32:51.577 real 0m11.759s 00:32:51.577 user 0m11.135s 00:32:51.577 sys 0m0.967s 00:32:51.577 17:29:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:51.577 17:29:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:32:51.577 ************************************ 00:32:51.577 END TEST skip_rpc_with_json 00:32:51.577 ************************************ 00:32:51.577 17:29:51 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:32:51.577 17:29:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:51.577 17:29:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:51.577 17:29:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:51.577 ************************************ 00:32:51.577 START TEST skip_rpc_with_delay 00:32:51.577 ************************************ 00:32:51.577 17:29:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:32:51.577 17:29:51 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:32:51.577 17:29:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:32:51.577 17:29:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:32:51.577 17:29:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:51.577 17:29:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:51.577 17:29:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:51.577 17:29:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:51.577 17:29:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:51.577 17:29:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:51.577 17:29:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:51.577 17:29:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:32:51.577 17:29:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:32:51.577 [2024-11-26 17:29:52.014174] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
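Two mechanisms are on display just above. skip_rpc_with_json shows that a configuration captured over RPC can drive a target that never opens an RPC socket: save_config snapshots the live state into test/rpc/config.json, a fresh spdk_tgt is started with --no-rpc-server --json, and the grep for 'TCP Transport Init' in log.txt confirms the TCP transport from the saved file was re-created. skip_rpc_with_delay then asserts exactly the ERROR printed above: --wait-for-rpc is meaningless together with --no-rpc-server, so spdk_tgt must refuse to start. A sketch of the save/replay round trip, run from the repository root, with commands and paths taken from the traced test:

scripts/rpc.py nvmf_create_transport -t tcp          # mutate the live target
scripts/rpc.py save_config > test/rpc/config.json    # snapshot the state as JSON
build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json \
    > test/rpc/log.txt 2>&1 &
sleep 5
grep -q 'TCP Transport Init' test/rpc/log.txt        # config replayed, no RPC needed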
00:32:51.577 17:29:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:32:51.577 17:29:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:51.577 17:29:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:51.577 17:29:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:51.577 00:32:51.577 real 0m0.184s 00:32:51.577 user 0m0.092s 00:32:51.577 sys 0m0.091s 00:32:51.577 17:29:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:51.577 17:29:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:32:51.577 ************************************ 00:32:51.577 END TEST skip_rpc_with_delay 00:32:51.577 ************************************ 00:32:51.577 17:29:52 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:32:51.577 17:29:52 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:32:51.577 17:29:52 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:32:51.577 17:29:52 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:51.577 17:29:52 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:51.577 17:29:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:51.577 ************************************ 00:32:51.577 START TEST exit_on_failed_rpc_init 00:32:51.577 ************************************ 00:32:51.577 17:29:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:32:51.577 17:29:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58332 00:32:51.577 17:29:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:32:51.577 17:29:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58332 00:32:51.577 17:29:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58332 ']' 00:32:51.577 17:29:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:51.577 17:29:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:51.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:51.577 17:29:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:51.577 17:29:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:51.577 17:29:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:32:51.835 [2024-11-26 17:29:52.272377] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:32:51.835 [2024-11-26 17:29:52.272521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58332 ] 00:32:51.836 [2024-11-26 17:29:52.456146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:52.094 [2024-11-26 17:29:52.579253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:53.029 17:29:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:53.030 17:29:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:32:53.030 17:29:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:32:53.030 17:29:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:32:53.030 17:29:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:32:53.030 17:29:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:32:53.030 17:29:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:53.030 17:29:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:53.030 17:29:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:53.030 17:29:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:53.030 17:29:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:53.030 17:29:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:53.030 17:29:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:53.030 17:29:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:32:53.030 17:29:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:32:53.030 [2024-11-26 17:29:53.621105] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:32:53.030 [2024-11-26 17:29:53.621235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58356 ] 00:32:53.288 [2024-11-26 17:29:53.808506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:53.288 [2024-11-26 17:29:53.927341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.288 [2024-11-26 17:29:53.927441] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
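Both targets in exit_on_failed_rpc_init default to the same RPC endpoint, /var/tmp/spdk.sock, so the second instance (core mask 0x2, pid 58356) hits the 'in use' error above, cannot complete rpc_listen, and exits non-zero, which is precisely the outcome the test wants (the es=234 it raises is normalized just below). Outside a negative test like this one, two targets can coexist by giving each its own socket; a hypothetical example using the -r/--rpc-socket option that SPDK applications accept, with illustrative socket names:

# run two targets side by side on distinct RPC sockets
build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
sleep 5
scripts/rpc.py -s /var/tmp/spdk_a.sock spdk_get_version
scripts/rpc.py -s /var/tmp/spdk_b.sock spdk_get_version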
00:32:53.288 [2024-11-26 17:29:53.927458] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:53.288 [2024-11-26 17:29:53.927478] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:53.547 17:29:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:32:53.547 17:29:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:53.547 17:29:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:32:53.547 17:29:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:32:53.547 17:29:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:32:53.547 17:29:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:53.547 17:29:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:32:53.547 17:29:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58332 00:32:53.547 17:29:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58332 ']' 00:32:53.547 17:29:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58332 00:32:53.547 17:29:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:32:53.547 17:29:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:53.547 17:29:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58332 00:32:53.805 17:29:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:53.805 17:29:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:53.805 17:29:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58332' 00:32:53.805 killing process with pid 58332 00:32:53.805 17:29:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58332 00:32:53.805 17:29:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58332 00:32:56.338 00:32:56.338 real 0m4.509s 00:32:56.338 user 0m4.860s 00:32:56.338 sys 0m0.651s 00:32:56.339 17:29:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:56.339 17:29:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:32:56.339 ************************************ 00:32:56.339 END TEST exit_on_failed_rpc_init 00:32:56.339 ************************************ 00:32:56.339 17:29:56 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:32:56.339 ************************************ 00:32:56.339 END TEST skip_rpc 00:32:56.339 ************************************ 00:32:56.339 00:32:56.339 real 0m24.916s 00:32:56.339 user 0m23.706s 00:32:56.339 sys 0m2.476s 00:32:56.339 17:29:56 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:56.339 17:29:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:56.339 17:29:56 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:32:56.339 17:29:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:56.339 17:29:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:56.339 17:29:56 -- common/autotest_common.sh@10 -- # set +x 00:32:56.339 
************************************ 00:32:56.339 START TEST rpc_client 00:32:56.339 ************************************ 00:32:56.339 17:29:56 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:32:56.339 * Looking for test storage... 00:32:56.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:32:56.339 17:29:56 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:56.339 17:29:56 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:32:56.339 17:29:56 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:56.339 17:29:57 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@345 -- # : 1 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@353 -- # local d=1 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@355 -- # echo 1 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@353 -- # local d=2 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@355 -- # echo 2 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:56.339 17:29:57 rpc_client -- scripts/common.sh@368 -- # return 0 00:32:56.339 17:29:57 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:56.339 17:29:57 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:56.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.339 --rc genhtml_branch_coverage=1 00:32:56.339 --rc genhtml_function_coverage=1 00:32:56.339 --rc genhtml_legend=1 00:32:56.339 --rc geninfo_all_blocks=1 00:32:56.339 --rc geninfo_unexecuted_blocks=1 00:32:56.339 00:32:56.339 ' 00:32:56.339 17:29:57 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:56.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.339 --rc genhtml_branch_coverage=1 00:32:56.339 --rc genhtml_function_coverage=1 00:32:56.339 --rc genhtml_legend=1 00:32:56.339 --rc geninfo_all_blocks=1 00:32:56.339 --rc geninfo_unexecuted_blocks=1 00:32:56.339 00:32:56.339 ' 00:32:56.339 17:29:57 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:56.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.339 --rc genhtml_branch_coverage=1 00:32:56.339 --rc genhtml_function_coverage=1 00:32:56.339 --rc genhtml_legend=1 00:32:56.339 --rc geninfo_all_blocks=1 00:32:56.339 --rc geninfo_unexecuted_blocks=1 00:32:56.339 00:32:56.339 ' 00:32:56.339 17:29:57 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:56.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.339 --rc genhtml_branch_coverage=1 00:32:56.339 --rc genhtml_function_coverage=1 00:32:56.339 --rc genhtml_legend=1 00:32:56.339 --rc geninfo_all_blocks=1 00:32:56.339 --rc geninfo_unexecuted_blocks=1 00:32:56.339 00:32:56.339 ' 00:32:56.339 17:29:57 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:32:56.598 OK 00:32:56.598 17:29:57 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:32:56.598 00:32:56.598 real 0m0.307s 00:32:56.598 user 0m0.150s 00:32:56.598 sys 0m0.171s 00:32:56.598 17:29:57 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:56.598 17:29:57 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:32:56.598 ************************************ 00:32:56.598 END TEST rpc_client 00:32:56.598 ************************************ 00:32:56.598 17:29:57 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:32:56.598 17:29:57 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:56.598 17:29:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:56.598 17:29:57 -- common/autotest_common.sh@10 -- # set +x 00:32:56.598 ************************************ 00:32:56.598 START TEST json_config 00:32:56.598 ************************************ 00:32:56.598 17:29:57 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:32:56.598 17:29:57 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:56.598 17:29:57 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:32:56.598 17:29:57 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:56.858 17:29:57 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:56.858 17:29:57 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:56.858 17:29:57 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:56.858 17:29:57 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:56.858 17:29:57 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:32:56.858 17:29:57 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:32:56.858 17:29:57 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:32:56.858 17:29:57 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:32:56.858 17:29:57 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:32:56.858 17:29:57 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:32:56.858 17:29:57 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:32:56.858 17:29:57 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:56.858 17:29:57 json_config -- scripts/common.sh@344 -- # case "$op" in 00:32:56.858 17:29:57 json_config -- scripts/common.sh@345 -- # : 1 00:32:56.858 17:29:57 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:56.858 17:29:57 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:56.858 17:29:57 json_config -- scripts/common.sh@365 -- # decimal 1 00:32:56.858 17:29:57 json_config -- scripts/common.sh@353 -- # local d=1 00:32:56.858 17:29:57 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:56.858 17:29:57 json_config -- scripts/common.sh@355 -- # echo 1 00:32:56.858 17:29:57 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:32:56.858 17:29:57 json_config -- scripts/common.sh@366 -- # decimal 2 00:32:56.858 17:29:57 json_config -- scripts/common.sh@353 -- # local d=2 00:32:56.858 17:29:57 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:56.858 17:29:57 json_config -- scripts/common.sh@355 -- # echo 2 00:32:56.858 17:29:57 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:32:56.858 17:29:57 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:56.858 17:29:57 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:56.858 17:29:57 json_config -- scripts/common.sh@368 -- # return 0 00:32:56.858 17:29:57 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:56.858 17:29:57 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:56.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.858 --rc genhtml_branch_coverage=1 00:32:56.858 --rc genhtml_function_coverage=1 00:32:56.858 --rc genhtml_legend=1 00:32:56.858 --rc geninfo_all_blocks=1 00:32:56.858 --rc geninfo_unexecuted_blocks=1 00:32:56.858 00:32:56.858 ' 00:32:56.858 17:29:57 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:56.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.858 --rc genhtml_branch_coverage=1 00:32:56.858 --rc genhtml_function_coverage=1 00:32:56.858 --rc genhtml_legend=1 00:32:56.858 --rc geninfo_all_blocks=1 00:32:56.858 --rc geninfo_unexecuted_blocks=1 00:32:56.858 00:32:56.858 ' 00:32:56.858 17:29:57 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:56.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.858 --rc genhtml_branch_coverage=1 00:32:56.858 --rc genhtml_function_coverage=1 00:32:56.858 --rc genhtml_legend=1 00:32:56.858 --rc geninfo_all_blocks=1 00:32:56.858 --rc geninfo_unexecuted_blocks=1 00:32:56.858 00:32:56.858 ' 00:32:56.858 17:29:57 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:56.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.858 --rc genhtml_branch_coverage=1 00:32:56.858 --rc genhtml_function_coverage=1 00:32:56.858 --rc genhtml_legend=1 00:32:56.858 --rc geninfo_all_blocks=1 00:32:56.858 --rc geninfo_unexecuted_blocks=1 00:32:56.858 00:32:56.858 ' 00:32:56.858 17:29:57 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:56.858 17:29:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:32:56.858 17:29:57 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:56.858 17:29:57 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:56.858 17:29:57 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:56.858 17:29:57 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:56.858 17:29:57 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:56.858 17:29:57 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:56.858 17:29:57 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:56.858 17:29:57 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:56.858 17:29:57 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:56.858 17:29:57 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:56.858 17:29:57 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e9ca998-9bad-4879-8e46-bbaba251cb9e 00:32:56.858 17:29:57 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=1e9ca998-9bad-4879-8e46-bbaba251cb9e 00:32:56.858 17:29:57 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:56.858 17:29:57 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:56.858 17:29:57 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:32:56.858 17:29:57 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:56.858 17:29:57 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:56.858 17:29:57 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:32:56.858 17:29:57 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:56.858 17:29:57 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:56.858 17:29:57 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:56.858 17:29:57 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.858 17:29:57 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.858 17:29:57 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.858 17:29:57 json_config -- paths/export.sh@5 -- # export PATH 00:32:56.858 17:29:57 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.858 17:29:57 json_config -- nvmf/common.sh@51 -- # : 0 00:32:56.858 17:29:57 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:56.858 17:29:57 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:56.858 17:29:57 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:56.858 17:29:57 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:56.858 17:29:57 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:56.858 17:29:57 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:56.858 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:56.858 17:29:57 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:56.858 17:29:57 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:56.858 17:29:57 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:56.859 17:29:57 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:32:56.859 17:29:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:32:56.859 17:29:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:32:56.859 17:29:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:32:56.859 17:29:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:32:56.859 17:29:57 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:32:56.859 WARNING: No tests are enabled so not running JSON configuration tests 00:32:56.859 17:29:57 json_config -- json_config/json_config.sh@28 -- # exit 0 00:32:56.859 00:32:56.859 real 0m0.240s 00:32:56.859 user 0m0.148s 00:32:56.859 sys 0m0.092s 00:32:56.859 17:29:57 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:56.859 17:29:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:32:56.859 ************************************ 00:32:56.859 END TEST json_config 00:32:56.859 ************************************ 00:32:56.859 17:29:57 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:32:56.859 17:29:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:56.859 17:29:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:56.859 17:29:57 -- common/autotest_common.sh@10 -- # set +x 00:32:56.859 ************************************ 00:32:56.859 START TEST json_config_extra_key 00:32:56.859 ************************************ 00:32:56.859 17:29:57 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:32:57.119 17:29:57 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:57.119 17:29:57 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:57.119 17:29:57 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:32:57.119 17:29:57 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:32:57.119 17:29:57 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:32:57.119 17:29:57 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:57.119 17:29:57 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:57.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.119 --rc genhtml_branch_coverage=1 00:32:57.119 --rc genhtml_function_coverage=1 00:32:57.119 --rc genhtml_legend=1 00:32:57.119 --rc geninfo_all_blocks=1 00:32:57.119 --rc geninfo_unexecuted_blocks=1 00:32:57.119 00:32:57.119 ' 00:32:57.119 17:29:57 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:57.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.119 --rc genhtml_branch_coverage=1 00:32:57.119 --rc genhtml_function_coverage=1 00:32:57.119 --rc genhtml_legend=1 00:32:57.119 --rc geninfo_all_blocks=1 00:32:57.119 --rc geninfo_unexecuted_blocks=1 00:32:57.119 00:32:57.119 ' 00:32:57.119 17:29:57 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:57.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.119 --rc genhtml_branch_coverage=1 00:32:57.119 --rc genhtml_function_coverage=1 00:32:57.119 --rc genhtml_legend=1 00:32:57.119 --rc geninfo_all_blocks=1 00:32:57.119 --rc geninfo_unexecuted_blocks=1 00:32:57.119 00:32:57.119 ' 00:32:57.119 17:29:57 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:57.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.119 --rc genhtml_branch_coverage=1 00:32:57.119 --rc 
genhtml_function_coverage=1 00:32:57.119 --rc genhtml_legend=1 00:32:57.119 --rc geninfo_all_blocks=1 00:32:57.119 --rc geninfo_unexecuted_blocks=1 00:32:57.119 00:32:57.119 ' 00:32:57.119 17:29:57 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:57.119 17:29:57 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:32:57.119 17:29:57 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:57.119 17:29:57 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:57.119 17:29:57 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:57.119 17:29:57 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:57.119 17:29:57 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:57.119 17:29:57 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:57.119 17:29:57 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:57.119 17:29:57 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:57.119 17:29:57 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:57.119 17:29:57 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:57.119 17:29:57 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e9ca998-9bad-4879-8e46-bbaba251cb9e 00:32:57.119 17:29:57 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=1e9ca998-9bad-4879-8e46-bbaba251cb9e 00:32:57.119 17:29:57 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:57.119 17:29:57 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:57.119 17:29:57 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:32:57.119 17:29:57 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:57.119 17:29:57 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:57.119 17:29:57 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:57.119 17:29:57 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.120 17:29:57 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.120 17:29:57 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.120 17:29:57 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:32:57.120 17:29:57 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.120 17:29:57 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:32:57.120 17:29:57 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:57.120 17:29:57 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:57.120 17:29:57 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:57.120 17:29:57 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:57.120 17:29:57 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:57.120 17:29:57 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:57.120 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:57.120 17:29:57 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:57.120 17:29:57 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:57.120 17:29:57 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:57.120 17:29:57 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:32:57.120 17:29:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:32:57.120 17:29:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:32:57.120 17:29:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:32:57.120 17:29:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:32:57.120 17:29:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:32:57.120 17:29:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:32:57.120 17:29:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:32:57.120 17:29:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:32:57.120 17:29:57 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:32:57.120 17:29:57 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:32:57.120 INFO: launching applications... 
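Worth flagging before the launch: the same scripted error is captured twice in the sourcing above, once under json_config and once under json_config_extra_key. Line 33 of test/nvmf/common.sh evaluates [ '' -eq 1 ] because a flag variable expands empty when unset, and [ rejects the empty string with "integer expression expected". A minimal sketch of the failure and the usual guard; SPDK_TEST_EXAMPLE_FLAG is a stand-in name, since the log does not show which variable line 33 actually reads:

    # Reproduces the logged failure: with the flag unset, this is [ '' -eq 1 ].
    if [ "$SPDK_TEST_EXAMPLE_FLAG" -eq 1 ]; then
        echo "flag enabled"
    fi

    # Defaulting the expansion keeps the integer test well-formed either way.
    if [ "${SPDK_TEST_EXAMPLE_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi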
00:32:57.120 17:29:57 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:32:57.120 17:29:57 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:32:57.120 17:29:57 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:32:57.120 17:29:57 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:32:57.120 17:29:57 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:32:57.120 Waiting for target to run... 00:32:57.120 17:29:57 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:32:57.120 17:29:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:32:57.120 17:29:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:32:57.120 17:29:57 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58566 00:32:57.120 17:29:57 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:32:57.120 17:29:57 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58566 /var/tmp/spdk_tgt.sock 00:32:57.120 17:29:57 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:32:57.120 17:29:57 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58566 ']' 00:32:57.120 17:29:57 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:32:57.120 17:29:57 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:57.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:32:57.120 17:29:57 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:32:57.120 17:29:57 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:57.120 17:29:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:32:57.379 [2024-11-26 17:29:57.845674] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:32:57.379 [2024-11-26 17:29:57.846028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58566 ] 00:32:57.636 [2024-11-26 17:29:58.251248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.894 [2024-11-26 17:29:58.361863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:58.460 00:32:58.461 INFO: shutting down applications... 00:32:58.461 17:29:59 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:58.461 17:29:59 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:32:58.461 17:29:59 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:32:58.461 17:29:59 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
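Up to here the extra-key test has only run json_config_test_start_app: launch spdk_tgt on a private RPC socket with the extra_key.json config, record pid 58566, and block in waitforlisten until the socket answers. A rough shell equivalent of that flow, not the helper's real body (the 0.1 s poll interval is an assumption; only the paths, flags, and max_retries=100 come from the trace):

    sock=/var/tmp/spdk_tgt.sock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid=$!

    # Poll until the target accepts RPCs on the socket.
    for ((i = 0; i < 100; i++)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods \
            &>/dev/null && break
        sleep 0.1
    done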
00:32:58.461 17:29:59 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:32:58.461 17:29:59 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:32:58.461 17:29:59 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:32:58.461 17:29:59 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58566 ]] 00:32:58.461 17:29:59 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58566 00:32:58.461 17:29:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:32:58.461 17:29:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:32:58.461 17:29:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58566 00:32:58.461 17:29:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:32:59.028 17:29:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:32:59.028 17:29:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:32:59.029 17:29:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58566 00:32:59.029 17:29:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:32:59.595 17:30:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:32:59.595 17:30:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:32:59.595 17:30:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58566 00:32:59.595 17:30:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:33:00.162 17:30:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:33:00.162 17:30:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:33:00.162 17:30:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58566 00:33:00.162 17:30:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:33:00.421 17:30:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:33:00.680 17:30:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:33:00.680 17:30:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58566 00:33:00.680 17:30:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:33:00.938 17:30:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:33:00.938 17:30:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:33:00.938 17:30:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58566 00:33:00.938 17:30:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:33:01.505 17:30:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:33:01.505 17:30:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:33:01.505 17:30:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58566 00:33:01.505 SPDK target shutdown done 00:33:01.505 Success 00:33:01.505 17:30:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:33:01.505 17:30:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:33:01.505 17:30:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:33:01.505 17:30:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:33:01.505 17:30:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:33:01.505 00:33:01.505 real 0m4.641s 00:33:01.505 user 0m4.109s 00:33:01.505 sys 0m0.631s 00:33:01.505 
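The shutdown just traced is the generic loop from json_config/common.sh: send SIGINT, then probe the pid with kill -0 every half second for at most 30 iterations (the run above took several passes before pid 58566 disappeared). Condensed to its core:

    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2>/dev/null || break   # process gone: clean shutdown
        sleep 0.5
    done
    kill -0 "$app_pid" 2>/dev/null && echo "target did not exit" >&2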
************************************ 00:33:01.505 END TEST json_config_extra_key 00:33:01.505 ************************************ 00:33:01.505 17:30:02 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:01.505 17:30:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:33:01.505 17:30:02 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:33:01.505 17:30:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:01.505 17:30:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:01.505 17:30:02 -- common/autotest_common.sh@10 -- # set +x 00:33:01.844 ************************************ 00:33:01.844 START TEST alias_rpc 00:33:01.844 ************************************ 00:33:01.844 17:30:02 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:33:01.844 * Looking for test storage... 00:33:01.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:33:01.844 17:30:02 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:01.844 17:30:02 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:33:01.844 17:30:02 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:01.844 17:30:02 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:01.844 17:30:02 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:01.844 17:30:02 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:01.844 17:30:02 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:01.844 17:30:02 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:33:01.844 17:30:02 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:33:01.845 17:30:02 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:33:01.845 17:30:02 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:33:01.845 17:30:02 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:33:01.845 17:30:02 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:33:01.845 17:30:02 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:33:01.845 17:30:02 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:01.845 17:30:02 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:33:01.845 17:30:02 alias_rpc -- scripts/common.sh@345 -- # : 1 00:33:01.845 17:30:02 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:01.845 17:30:02 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:01.845 17:30:02 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:33:01.845 17:30:02 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:33:01.845 17:30:02 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:01.845 17:30:02 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:33:01.845 17:30:02 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:01.845 17:30:02 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:33:01.845 17:30:02 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:33:01.845 17:30:02 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:01.845 17:30:02 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:33:01.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:01.845 17:30:02 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:01.845 17:30:02 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:01.845 17:30:02 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:01.845 17:30:02 alias_rpc -- scripts/common.sh@368 -- # return 0 00:33:01.845 17:30:02 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:01.845 17:30:02 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:01.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.845 --rc genhtml_branch_coverage=1 00:33:01.845 --rc genhtml_function_coverage=1 00:33:01.845 --rc genhtml_legend=1 00:33:01.845 --rc geninfo_all_blocks=1 00:33:01.845 --rc geninfo_unexecuted_blocks=1 00:33:01.845 00:33:01.845 ' 00:33:01.845 17:30:02 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:01.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.845 --rc genhtml_branch_coverage=1 00:33:01.845 --rc genhtml_function_coverage=1 00:33:01.845 --rc genhtml_legend=1 00:33:01.845 --rc geninfo_all_blocks=1 00:33:01.845 --rc geninfo_unexecuted_blocks=1 00:33:01.845 00:33:01.845 ' 00:33:01.845 17:30:02 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:01.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.845 --rc genhtml_branch_coverage=1 00:33:01.845 --rc genhtml_function_coverage=1 00:33:01.845 --rc genhtml_legend=1 00:33:01.845 --rc geninfo_all_blocks=1 00:33:01.845 --rc geninfo_unexecuted_blocks=1 00:33:01.845 00:33:01.845 ' 00:33:01.845 17:30:02 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:01.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.845 --rc genhtml_branch_coverage=1 00:33:01.845 --rc genhtml_function_coverage=1 00:33:01.845 --rc genhtml_legend=1 00:33:01.845 --rc geninfo_all_blocks=1 00:33:01.845 --rc geninfo_unexecuted_blocks=1 00:33:01.845 00:33:01.845 ' 00:33:01.845 17:30:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:33:01.845 17:30:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58677 00:33:01.845 17:30:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58677 00:33:01.845 17:30:02 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58677 ']' 00:33:01.845 17:30:02 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:01.845 17:30:02 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:01.845 17:30:02 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:01.845 17:30:02 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:01.845 17:30:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:01.845 17:30:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:02.117 [2024-11-26 17:30:02.542913] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:33:02.117 [2024-11-26 17:30:02.543310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58677 ] 00:33:02.117 [2024-11-26 17:30:02.725827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:02.375 [2024-11-26 17:30:02.849773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:03.307 17:30:03 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:03.307 17:30:03 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:33:03.307 17:30:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:33:03.566 17:30:04 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58677 00:33:03.566 17:30:04 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58677 ']' 00:33:03.566 17:30:04 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58677 00:33:03.566 17:30:04 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:33:03.566 17:30:04 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:03.566 17:30:04 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58677 00:33:03.566 killing process with pid 58677 00:33:03.566 17:30:04 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:03.566 17:30:04 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:03.566 17:30:04 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58677' 00:33:03.566 17:30:04 alias_rpc -- common/autotest_common.sh@973 -- # kill 58677 00:33:03.566 17:30:04 alias_rpc -- common/autotest_common.sh@978 -- # wait 58677 00:33:06.098 ************************************ 00:33:06.098 END TEST alias_rpc 00:33:06.098 ************************************ 00:33:06.098 00:33:06.098 real 0m4.309s 00:33:06.098 user 0m4.266s 00:33:06.098 sys 0m0.633s 00:33:06.098 17:30:06 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:06.098 17:30:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:06.098 17:30:06 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:33:06.098 17:30:06 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:33:06.098 17:30:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:06.098 17:30:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:06.098 17:30:06 -- common/autotest_common.sh@10 -- # set +x 00:33:06.098 ************************************ 00:33:06.098 START TEST spdkcli_tcp 00:33:06.098 ************************************ 00:33:06.098 17:30:06 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:33:06.098 * Looking for test storage... 
00:33:06.098 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:33:06.098 17:30:06 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:06.098 17:30:06 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:33:06.098 17:30:06 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:06.376 17:30:06 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:06.376 17:30:06 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:33:06.376 17:30:06 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:06.376 17:30:06 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:06.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.376 --rc genhtml_branch_coverage=1 00:33:06.376 --rc genhtml_function_coverage=1 00:33:06.376 --rc genhtml_legend=1 00:33:06.376 --rc geninfo_all_blocks=1 00:33:06.376 --rc geninfo_unexecuted_blocks=1 00:33:06.376 00:33:06.376 ' 00:33:06.376 17:30:06 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:06.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.376 --rc genhtml_branch_coverage=1 00:33:06.376 --rc genhtml_function_coverage=1 00:33:06.376 --rc genhtml_legend=1 00:33:06.376 --rc geninfo_all_blocks=1 00:33:06.376 --rc geninfo_unexecuted_blocks=1 00:33:06.376 
00:33:06.376 ' 00:33:06.376 17:30:06 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:06.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.376 --rc genhtml_branch_coverage=1 00:33:06.376 --rc genhtml_function_coverage=1 00:33:06.376 --rc genhtml_legend=1 00:33:06.376 --rc geninfo_all_blocks=1 00:33:06.376 --rc geninfo_unexecuted_blocks=1 00:33:06.376 00:33:06.376 ' 00:33:06.376 17:30:06 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:06.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.376 --rc genhtml_branch_coverage=1 00:33:06.376 --rc genhtml_function_coverage=1 00:33:06.376 --rc genhtml_legend=1 00:33:06.376 --rc geninfo_all_blocks=1 00:33:06.376 --rc geninfo_unexecuted_blocks=1 00:33:06.376 00:33:06.376 ' 00:33:06.376 17:30:06 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:33:06.376 17:30:06 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:33:06.377 17:30:06 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:33:06.377 17:30:06 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:33:06.377 17:30:06 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:33:06.377 17:30:06 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:06.377 17:30:06 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:33:06.377 17:30:06 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:06.377 17:30:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:06.377 17:30:06 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58784 00:33:06.377 17:30:06 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:33:06.377 17:30:06 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58784 00:33:06.377 17:30:06 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58784 ']' 00:33:06.377 17:30:06 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:06.377 17:30:06 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:06.377 17:30:06 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:06.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:06.377 17:30:06 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:06.377 17:30:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:06.377 [2024-11-26 17:30:06.938096] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
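What follows is the heart of spdkcli_tcp: the freshly started target (pid 58784, two reactors on cores 0 and 1) listens only on a Unix-domain socket, so the test bridges TCP 127.0.0.1:9998 to it with socat and drives rpc.py over TCP. The commands, lifted from the trace below into a standalone sketch (the final kill is a stand-in for the test's own err_cleanup trap):

    # Bridge a TCP port to the target's Unix-domain RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Drive the RPC server over TCP: -r retries, -t per-call timeout in seconds.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 \
        rpc_get_methods

    kill "$socat_pid"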
00:33:06.377 [2024-11-26 17:30:06.938236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58784 ] 00:33:06.636 [2024-11-26 17:30:07.120747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:06.636 [2024-11-26 17:30:07.238463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:06.636 [2024-11-26 17:30:07.238532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:07.571 17:30:08 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:07.571 17:30:08 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:33:07.571 17:30:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58807 00:33:07.571 17:30:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:33:07.571 17:30:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:33:07.830 [ 00:33:07.830 "bdev_malloc_delete", 00:33:07.830 "bdev_malloc_create", 00:33:07.830 "bdev_null_resize", 00:33:07.830 "bdev_null_delete", 00:33:07.830 "bdev_null_create", 00:33:07.830 "bdev_nvme_cuse_unregister", 00:33:07.830 "bdev_nvme_cuse_register", 00:33:07.830 "bdev_opal_new_user", 00:33:07.830 "bdev_opal_set_lock_state", 00:33:07.830 "bdev_opal_delete", 00:33:07.830 "bdev_opal_get_info", 00:33:07.830 "bdev_opal_create", 00:33:07.830 "bdev_nvme_opal_revert", 00:33:07.830 "bdev_nvme_opal_init", 00:33:07.830 "bdev_nvme_send_cmd", 00:33:07.830 "bdev_nvme_set_keys", 00:33:07.830 "bdev_nvme_get_path_iostat", 00:33:07.830 "bdev_nvme_get_mdns_discovery_info", 00:33:07.830 "bdev_nvme_stop_mdns_discovery", 00:33:07.830 "bdev_nvme_start_mdns_discovery", 00:33:07.830 "bdev_nvme_set_multipath_policy", 00:33:07.830 "bdev_nvme_set_preferred_path", 00:33:07.830 "bdev_nvme_get_io_paths", 00:33:07.830 "bdev_nvme_remove_error_injection", 00:33:07.830 "bdev_nvme_add_error_injection", 00:33:07.830 "bdev_nvme_get_discovery_info", 00:33:07.830 "bdev_nvme_stop_discovery", 00:33:07.830 "bdev_nvme_start_discovery", 00:33:07.830 "bdev_nvme_get_controller_health_info", 00:33:07.830 "bdev_nvme_disable_controller", 00:33:07.830 "bdev_nvme_enable_controller", 00:33:07.830 "bdev_nvme_reset_controller", 00:33:07.830 "bdev_nvme_get_transport_statistics", 00:33:07.830 "bdev_nvme_apply_firmware", 00:33:07.830 "bdev_nvme_detach_controller", 00:33:07.830 "bdev_nvme_get_controllers", 00:33:07.830 "bdev_nvme_attach_controller", 00:33:07.830 "bdev_nvme_set_hotplug", 00:33:07.830 "bdev_nvme_set_options", 00:33:07.830 "bdev_passthru_delete", 00:33:07.830 "bdev_passthru_create", 00:33:07.830 "bdev_lvol_set_parent_bdev", 00:33:07.830 "bdev_lvol_set_parent", 00:33:07.830 "bdev_lvol_check_shallow_copy", 00:33:07.830 "bdev_lvol_start_shallow_copy", 00:33:07.830 "bdev_lvol_grow_lvstore", 00:33:07.830 "bdev_lvol_get_lvols", 00:33:07.830 "bdev_lvol_get_lvstores", 00:33:07.830 "bdev_lvol_delete", 00:33:07.830 "bdev_lvol_set_read_only", 00:33:07.830 "bdev_lvol_resize", 00:33:07.830 "bdev_lvol_decouple_parent", 00:33:07.830 "bdev_lvol_inflate", 00:33:07.830 "bdev_lvol_rename", 00:33:07.830 "bdev_lvol_clone_bdev", 00:33:07.830 "bdev_lvol_clone", 00:33:07.830 "bdev_lvol_snapshot", 00:33:07.830 "bdev_lvol_create", 00:33:07.830 "bdev_lvol_delete_lvstore", 00:33:07.830 "bdev_lvol_rename_lvstore", 00:33:07.830 
"bdev_lvol_create_lvstore", 00:33:07.830 "bdev_raid_set_options", 00:33:07.830 "bdev_raid_remove_base_bdev", 00:33:07.830 "bdev_raid_add_base_bdev", 00:33:07.830 "bdev_raid_delete", 00:33:07.830 "bdev_raid_create", 00:33:07.830 "bdev_raid_get_bdevs", 00:33:07.830 "bdev_error_inject_error", 00:33:07.830 "bdev_error_delete", 00:33:07.830 "bdev_error_create", 00:33:07.830 "bdev_split_delete", 00:33:07.830 "bdev_split_create", 00:33:07.830 "bdev_delay_delete", 00:33:07.830 "bdev_delay_create", 00:33:07.830 "bdev_delay_update_latency", 00:33:07.830 "bdev_zone_block_delete", 00:33:07.830 "bdev_zone_block_create", 00:33:07.830 "blobfs_create", 00:33:07.830 "blobfs_detect", 00:33:07.830 "blobfs_set_cache_size", 00:33:07.830 "bdev_xnvme_delete", 00:33:07.830 "bdev_xnvme_create", 00:33:07.830 "bdev_aio_delete", 00:33:07.830 "bdev_aio_rescan", 00:33:07.830 "bdev_aio_create", 00:33:07.830 "bdev_ftl_set_property", 00:33:07.830 "bdev_ftl_get_properties", 00:33:07.830 "bdev_ftl_get_stats", 00:33:07.830 "bdev_ftl_unmap", 00:33:07.830 "bdev_ftl_unload", 00:33:07.830 "bdev_ftl_delete", 00:33:07.830 "bdev_ftl_load", 00:33:07.830 "bdev_ftl_create", 00:33:07.830 "bdev_virtio_attach_controller", 00:33:07.830 "bdev_virtio_scsi_get_devices", 00:33:07.830 "bdev_virtio_detach_controller", 00:33:07.830 "bdev_virtio_blk_set_hotplug", 00:33:07.830 "bdev_iscsi_delete", 00:33:07.830 "bdev_iscsi_create", 00:33:07.830 "bdev_iscsi_set_options", 00:33:07.830 "accel_error_inject_error", 00:33:07.830 "ioat_scan_accel_module", 00:33:07.830 "dsa_scan_accel_module", 00:33:07.830 "iaa_scan_accel_module", 00:33:07.830 "keyring_file_remove_key", 00:33:07.830 "keyring_file_add_key", 00:33:07.830 "keyring_linux_set_options", 00:33:07.830 "fsdev_aio_delete", 00:33:07.830 "fsdev_aio_create", 00:33:07.830 "iscsi_get_histogram", 00:33:07.830 "iscsi_enable_histogram", 00:33:07.830 "iscsi_set_options", 00:33:07.830 "iscsi_get_auth_groups", 00:33:07.830 "iscsi_auth_group_remove_secret", 00:33:07.830 "iscsi_auth_group_add_secret", 00:33:07.830 "iscsi_delete_auth_group", 00:33:07.830 "iscsi_create_auth_group", 00:33:07.830 "iscsi_set_discovery_auth", 00:33:07.830 "iscsi_get_options", 00:33:07.830 "iscsi_target_node_request_logout", 00:33:07.830 "iscsi_target_node_set_redirect", 00:33:07.830 "iscsi_target_node_set_auth", 00:33:07.830 "iscsi_target_node_add_lun", 00:33:07.830 "iscsi_get_stats", 00:33:07.830 "iscsi_get_connections", 00:33:07.830 "iscsi_portal_group_set_auth", 00:33:07.830 "iscsi_start_portal_group", 00:33:07.830 "iscsi_delete_portal_group", 00:33:07.830 "iscsi_create_portal_group", 00:33:07.830 "iscsi_get_portal_groups", 00:33:07.830 "iscsi_delete_target_node", 00:33:07.830 "iscsi_target_node_remove_pg_ig_maps", 00:33:07.830 "iscsi_target_node_add_pg_ig_maps", 00:33:07.830 "iscsi_create_target_node", 00:33:07.830 "iscsi_get_target_nodes", 00:33:07.830 "iscsi_delete_initiator_group", 00:33:07.830 "iscsi_initiator_group_remove_initiators", 00:33:07.830 "iscsi_initiator_group_add_initiators", 00:33:07.830 "iscsi_create_initiator_group", 00:33:07.830 "iscsi_get_initiator_groups", 00:33:07.830 "nvmf_set_crdt", 00:33:07.830 "nvmf_set_config", 00:33:07.830 "nvmf_set_max_subsystems", 00:33:07.830 "nvmf_stop_mdns_prr", 00:33:07.830 "nvmf_publish_mdns_prr", 00:33:07.831 "nvmf_subsystem_get_listeners", 00:33:07.831 "nvmf_subsystem_get_qpairs", 00:33:07.831 "nvmf_subsystem_get_controllers", 00:33:07.831 "nvmf_get_stats", 00:33:07.831 "nvmf_get_transports", 00:33:07.831 "nvmf_create_transport", 00:33:07.831 "nvmf_get_targets", 00:33:07.831 
"nvmf_delete_target", 00:33:07.831 "nvmf_create_target", 00:33:07.831 "nvmf_subsystem_allow_any_host", 00:33:07.831 "nvmf_subsystem_set_keys", 00:33:07.831 "nvmf_subsystem_remove_host", 00:33:07.831 "nvmf_subsystem_add_host", 00:33:07.831 "nvmf_ns_remove_host", 00:33:07.831 "nvmf_ns_add_host", 00:33:07.831 "nvmf_subsystem_remove_ns", 00:33:07.831 "nvmf_subsystem_set_ns_ana_group", 00:33:07.831 "nvmf_subsystem_add_ns", 00:33:07.831 "nvmf_subsystem_listener_set_ana_state", 00:33:07.831 "nvmf_discovery_get_referrals", 00:33:07.831 "nvmf_discovery_remove_referral", 00:33:07.831 "nvmf_discovery_add_referral", 00:33:07.831 "nvmf_subsystem_remove_listener", 00:33:07.831 "nvmf_subsystem_add_listener", 00:33:07.831 "nvmf_delete_subsystem", 00:33:07.831 "nvmf_create_subsystem", 00:33:07.831 "nvmf_get_subsystems", 00:33:07.831 "env_dpdk_get_mem_stats", 00:33:07.831 "nbd_get_disks", 00:33:07.831 "nbd_stop_disk", 00:33:07.831 "nbd_start_disk", 00:33:07.831 "ublk_recover_disk", 00:33:07.831 "ublk_get_disks", 00:33:07.831 "ublk_stop_disk", 00:33:07.831 "ublk_start_disk", 00:33:07.831 "ublk_destroy_target", 00:33:07.831 "ublk_create_target", 00:33:07.831 "virtio_blk_create_transport", 00:33:07.831 "virtio_blk_get_transports", 00:33:07.831 "vhost_controller_set_coalescing", 00:33:07.831 "vhost_get_controllers", 00:33:07.831 "vhost_delete_controller", 00:33:07.831 "vhost_create_blk_controller", 00:33:07.831 "vhost_scsi_controller_remove_target", 00:33:07.831 "vhost_scsi_controller_add_target", 00:33:07.831 "vhost_start_scsi_controller", 00:33:07.831 "vhost_create_scsi_controller", 00:33:07.831 "thread_set_cpumask", 00:33:07.831 "scheduler_set_options", 00:33:07.831 "framework_get_governor", 00:33:07.831 "framework_get_scheduler", 00:33:07.831 "framework_set_scheduler", 00:33:07.831 "framework_get_reactors", 00:33:07.831 "thread_get_io_channels", 00:33:07.831 "thread_get_pollers", 00:33:07.831 "thread_get_stats", 00:33:07.831 "framework_monitor_context_switch", 00:33:07.831 "spdk_kill_instance", 00:33:07.831 "log_enable_timestamps", 00:33:07.831 "log_get_flags", 00:33:07.831 "log_clear_flag", 00:33:07.831 "log_set_flag", 00:33:07.831 "log_get_level", 00:33:07.831 "log_set_level", 00:33:07.831 "log_get_print_level", 00:33:07.831 "log_set_print_level", 00:33:07.831 "framework_enable_cpumask_locks", 00:33:07.831 "framework_disable_cpumask_locks", 00:33:07.831 "framework_wait_init", 00:33:07.831 "framework_start_init", 00:33:07.831 "scsi_get_devices", 00:33:07.831 "bdev_get_histogram", 00:33:07.831 "bdev_enable_histogram", 00:33:07.831 "bdev_set_qos_limit", 00:33:07.831 "bdev_set_qd_sampling_period", 00:33:07.831 "bdev_get_bdevs", 00:33:07.831 "bdev_reset_iostat", 00:33:07.831 "bdev_get_iostat", 00:33:07.831 "bdev_examine", 00:33:07.831 "bdev_wait_for_examine", 00:33:07.831 "bdev_set_options", 00:33:07.831 "accel_get_stats", 00:33:07.831 "accel_set_options", 00:33:07.831 "accel_set_driver", 00:33:07.831 "accel_crypto_key_destroy", 00:33:07.831 "accel_crypto_keys_get", 00:33:07.831 "accel_crypto_key_create", 00:33:07.831 "accel_assign_opc", 00:33:07.831 "accel_get_module_info", 00:33:07.831 "accel_get_opc_assignments", 00:33:07.831 "vmd_rescan", 00:33:07.831 "vmd_remove_device", 00:33:07.831 "vmd_enable", 00:33:07.831 "sock_get_default_impl", 00:33:07.831 "sock_set_default_impl", 00:33:07.831 "sock_impl_set_options", 00:33:07.831 "sock_impl_get_options", 00:33:07.831 "iobuf_get_stats", 00:33:07.831 "iobuf_set_options", 00:33:07.831 "keyring_get_keys", 00:33:07.831 "framework_get_pci_devices", 00:33:07.831 
"framework_get_config", 00:33:07.831 "framework_get_subsystems", 00:33:07.831 "fsdev_set_opts", 00:33:07.831 "fsdev_get_opts", 00:33:07.831 "trace_get_info", 00:33:07.831 "trace_get_tpoint_group_mask", 00:33:07.831 "trace_disable_tpoint_group", 00:33:07.831 "trace_enable_tpoint_group", 00:33:07.831 "trace_clear_tpoint_mask", 00:33:07.831 "trace_set_tpoint_mask", 00:33:07.831 "notify_get_notifications", 00:33:07.831 "notify_get_types", 00:33:07.831 "spdk_get_version", 00:33:07.831 "rpc_get_methods" 00:33:07.831 ] 00:33:07.831 17:30:08 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:33:07.831 17:30:08 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:07.831 17:30:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:07.831 17:30:08 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:33:07.831 17:30:08 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58784 00:33:07.831 17:30:08 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58784 ']' 00:33:07.831 17:30:08 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58784 00:33:07.831 17:30:08 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:33:07.831 17:30:08 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:07.831 17:30:08 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58784 00:33:07.831 killing process with pid 58784 00:33:07.831 17:30:08 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:07.831 17:30:08 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:07.831 17:30:08 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58784' 00:33:07.831 17:30:08 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58784 00:33:07.831 17:30:08 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58784 00:33:10.361 ************************************ 00:33:10.361 END TEST spdkcli_tcp 00:33:10.361 ************************************ 00:33:10.361 00:33:10.361 real 0m4.286s 00:33:10.361 user 0m7.566s 00:33:10.361 sys 0m0.710s 00:33:10.361 17:30:10 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:10.361 17:30:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:10.361 17:30:10 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:33:10.361 17:30:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:10.361 17:30:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:10.361 17:30:10 -- common/autotest_common.sh@10 -- # set +x 00:33:10.361 ************************************ 00:33:10.361 START TEST dpdk_mem_utility 00:33:10.361 ************************************ 00:33:10.361 17:30:10 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:33:10.620 * Looking for test storage... 
00:33:10.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:33:10.620 17:30:11 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:10.620 17:30:11 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:33:10.620 17:30:11 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:10.620 17:30:11 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:10.620 17:30:11 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:33:10.620 17:30:11 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:10.620 17:30:11 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:10.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.620 --rc genhtml_branch_coverage=1 00:33:10.620 --rc genhtml_function_coverage=1 00:33:10.620 --rc genhtml_legend=1 00:33:10.620 --rc geninfo_all_blocks=1 00:33:10.620 --rc geninfo_unexecuted_blocks=1 00:33:10.620 00:33:10.620 ' 00:33:10.620 17:30:11 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:10.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.620 --rc 
genhtml_branch_coverage=1 00:33:10.620 --rc genhtml_function_coverage=1 00:33:10.620 --rc genhtml_legend=1 00:33:10.620 --rc geninfo_all_blocks=1 00:33:10.620 --rc geninfo_unexecuted_blocks=1 00:33:10.620 00:33:10.620 ' 00:33:10.620 17:30:11 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:10.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.620 --rc genhtml_branch_coverage=1 00:33:10.620 --rc genhtml_function_coverage=1 00:33:10.620 --rc genhtml_legend=1 00:33:10.620 --rc geninfo_all_blocks=1 00:33:10.620 --rc geninfo_unexecuted_blocks=1 00:33:10.620 00:33:10.620 ' 00:33:10.620 17:30:11 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:10.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.620 --rc genhtml_branch_coverage=1 00:33:10.620 --rc genhtml_function_coverage=1 00:33:10.620 --rc genhtml_legend=1 00:33:10.620 --rc geninfo_all_blocks=1 00:33:10.620 --rc geninfo_unexecuted_blocks=1 00:33:10.620 00:33:10.620 ' 00:33:10.620 17:30:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:33:10.620 17:30:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58912 00:33:10.620 17:30:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:10.620 17:30:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58912 00:33:10.620 17:30:11 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58912 ']' 00:33:10.620 17:30:11 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:10.620 17:30:11 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:10.620 17:30:11 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:10.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:10.620 17:30:11 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:10.620 17:30:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:33:10.620 [2024-11-26 17:30:11.287277] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
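The dpdk_mem_utility run that follows boils down to two commands against the live target: an RPC that makes spdk_tgt dump its DPDK allocator state to /tmp/spdk_mem_dump.txt, and the dpdk_mem_info.py script that renders that dump, first as a heap/mempool/memzone summary and then, with -m 0, as the per-element listing of heap 0 that fills the rest of this log. As a standalone snippet (run from the repo root, with a target already listening on the default socket):

    # Ask the running spdk_tgt to write its DPDK memory stats.
    scripts/rpc.py env_dpdk_get_mem_stats    # prints {"filename": "/tmp/spdk_mem_dump.txt"}

    # Summarize heaps, mempools, and memzones from the dump.
    scripts/dpdk_mem_info.py

    # Element-level breakdown of heap id 0: busy/free lists with addresses and sizes.
    scripts/dpdk_mem_info.py -m 0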
00:33:10.620 [2024-11-26 17:30:11.287648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58912 ] 00:33:10.879 [2024-11-26 17:30:11.469901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.138 [2024-11-26 17:30:11.586582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:12.076 17:30:12 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:12.076 17:30:12 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:33:12.076 17:30:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:33:12.076 17:30:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:33:12.076 17:30:12 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.076 17:30:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:33:12.076 { 00:33:12.076 "filename": "/tmp/spdk_mem_dump.txt" 00:33:12.076 } 00:33:12.076 17:30:12 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.076 17:30:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:33:12.076 DPDK memory size 824.000000 MiB in 1 heap(s) 00:33:12.076 1 heaps totaling size 824.000000 MiB 00:33:12.076 size: 824.000000 MiB heap id: 0 00:33:12.076 end heaps---------- 00:33:12.076 9 mempools totaling size 603.782043 MiB 00:33:12.076 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:33:12.076 size: 158.602051 MiB name: PDU_data_out_Pool 00:33:12.076 size: 100.555481 MiB name: bdev_io_58912 00:33:12.076 size: 50.003479 MiB name: msgpool_58912 00:33:12.076 size: 36.509338 MiB name: fsdev_io_58912 00:33:12.076 size: 21.763794 MiB name: PDU_Pool 00:33:12.076 size: 19.513306 MiB name: SCSI_TASK_Pool 00:33:12.076 size: 4.133484 MiB name: evtpool_58912 00:33:12.076 size: 0.026123 MiB name: Session_Pool 00:33:12.076 end mempools------- 00:33:12.076 6 memzones totaling size 4.142822 MiB 00:33:12.076 size: 1.000366 MiB name: RG_ring_0_58912 00:33:12.076 size: 1.000366 MiB name: RG_ring_1_58912 00:33:12.076 size: 1.000366 MiB name: RG_ring_4_58912 00:33:12.076 size: 1.000366 MiB name: RG_ring_5_58912 00:33:12.076 size: 0.125366 MiB name: RG_ring_2_58912 00:33:12.076 size: 0.015991 MiB name: RG_ring_3_58912 00:33:12.076 end memzones------- 00:33:12.076 17:30:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:33:12.076 heap id: 0 total size: 824.000000 MiB number of busy elements: 319 number of free elements: 18 00:33:12.076 list of free elements. 
size: 16.780396 MiB
00:33:12.076 element at address: 0x200006400000 with size: 1.995972 MiB
00:33:12.076 element at address: 0x20000a600000 with size: 1.995972 MiB
00:33:12.076 element at address: 0x200003e00000 with size: 1.991028 MiB
00:33:12.076 element at address: 0x200019500040 with size: 0.999939 MiB
00:33:12.076 element at address: 0x200019900040 with size: 0.999939 MiB
00:33:12.076 element at address: 0x200019a00000 with size: 0.999084 MiB
00:33:12.076 element at address: 0x200032600000 with size: 0.994324 MiB
00:33:12.076 element at address: 0x200000400000 with size: 0.992004 MiB
00:33:12.076 element at address: 0x200019200000 with size: 0.959656 MiB
00:33:12.076 element at address: 0x200019d00040 with size: 0.936401 MiB
00:33:12.076 element at address: 0x200000200000 with size: 0.716980 MiB
00:33:12.076 element at address: 0x20001b400000 with size: 0.561707 MiB
00:33:12.076 element at address: 0x200000c00000 with size: 0.489197 MiB
00:33:12.076 element at address: 0x200019600000 with size: 0.487976 MiB
00:33:12.076 element at address: 0x200019e00000 with size: 0.485413 MiB
00:33:12.076 element at address: 0x200012c00000 with size: 0.433472 MiB
00:33:12.076 element at address: 0x200028800000 with size: 0.390442 MiB
00:33:12.076 element at address: 0x200000800000 with size: 0.350891 MiB
00:33:12.076 list of standard malloc elements. size: 199.288696 MiB
00:33:12.076 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:33:12.076 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:33:12.076 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:33:12.076 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:33:12.076 element at address: 0x200019bfff80 with size: 1.000183 MiB
00:33:12.076 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:33:12.076 element at address: 0x200019deff40 with size: 0.062683 MiB
00:33:12.076 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:33:12.076 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:33:12.076 element at address: 0x200019defdc0 with size: 0.000366 MiB
00:33:12.076 element at address: 0x200012bff040 with size: 0.000305 MiB
00:33:12.076 [... several hundred further standard malloc elements of 0.000244 MiB each, in the ranges 0x2000002d7b00-0x2000004ffdc0, 0x20000087e1c0-0x2000008ffa80, 0x200000c7d3c0-0x200000cff000, 0x20000a5ff200-0x20000a5fff00, 0x200012bff180-0x200012c6f880, 0x200012cefbc0-0x2000196fdd00, 0x200019affc40-0x200019ebc680, 0x20001b48fcc0-0x20001b4953c0 and 0x200028863f40-0x20002886fe80 ...]
00:33:12.078 list of memzone associated elements. size: 607.930908 MiB
00:33:12.078 element at address: 0x20001b4954c0 with size: 211.416809 MiB
00:33:12.078 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:33:12.078 element at address: 0x20002886ff80 with size: 157.562622 MiB
00:33:12.078 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:33:12.078 element at address: 0x200012df1e40 with size: 100.055115 MiB
00:33:12.078 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58912_0
00:33:12.078 element at address: 0x200000dff340 with size: 48.003113 MiB
00:33:12.078 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58912_0
00:33:12.078 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:33:12.078 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58912_0
00:33:12.078 element at address: 0x200019fbe900 with size: 20.255615 MiB
00:33:12.078 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:33:12.078 element at address: 0x2000327feb00 with size: 18.005127 MiB
00:33:12.078 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:33:12.078 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:33:12.078 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58912_0
00:33:12.078 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:33:12.078 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58912
00:33:12.078 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:33:12.078 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58912
00:33:12.078 element at address: 0x2000196fde00 with size: 1.008179 MiB
00:33:12.078 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:33:12.078 element at address: 0x200019ebc780 with size: 1.008179 MiB
00:33:12.078 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:33:12.078 element at address: 0x2000192fde00 with size: 1.008179 MiB
00:33:12.078 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:33:12.078 element at address: 0x200012cefcc0 with size: 1.008179 MiB
00:33:12.078 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:33:12.078 element at address: 0x200000cff100 with size: 1.000549 MiB
00:33:12.078 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58912
00:33:12.078 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:33:12.078 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58912
00:33:12.078 element at address: 0x200019affd40 with size: 1.000549 MiB
00:33:12.078 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58912
00:33:12.078 element at address: 0x2000326fe8c0 with size: 1.000549 MiB
00:33:12.078 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58912
00:33:12.079 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:33:12.079 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58912
00:33:12.079 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:33:12.079 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58912
00:33:12.079 element at address: 0x20001967dac0 with size: 0.500549 MiB
00:33:12.079 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:33:12.079 element at address: 0x200012c6f980 with size: 0.500549 MiB
00:33:12.079 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:33:12.079 element at address: 0x200019e7c440 with size: 0.250549 MiB
00:33:12.079 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:33:12.079 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:33:12.079 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58912
00:33:12.079 element at address: 0x20000085df80 with size: 0.125549 MiB
00:33:12.079 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58912
00:33:12.079 element at address: 0x2000192f5ac0 with size: 0.031799 MiB
00:33:12.079 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:33:12.079 element at address: 0x200028864140 with size: 0.023804 MiB
00:33:12.079 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:33:12.079 element at address: 0x200000859d40 with size: 0.016174 MiB
00:33:12.079 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58912
00:33:12.079 element at address: 0x20002886a2c0 with size: 0.002502 MiB
00:33:12.079 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:33:12.079 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:33:12.079 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58912
00:33:12.079 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:33:12.079 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58912
00:33:12.079 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:33:12.079 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58912
00:33:12.079 element at address: 0x20002886ae00 with size: 0.000366 MiB
00:33:12.079 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:33:12.079 17:30:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:33:12.079 17:30:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58912
00:33:12.079 17:30:12 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58912 ']'
00:33:12.079 17:30:12 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58912
00:33:12.079 17:30:12 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:33:12.079 17:30:12 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:12.079 17:30:12 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58912
00:33:12.079 17:30:12 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:12.079 17:30:12 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:12.079 17:30:12 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58912'
killing process with pid 58912
00:33:12.079 17:30:12 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58912
00:33:12.079 17:30:12 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58912
00:33:14.612
00:33:14.612 real 0m4.145s
00:33:14.612 user 0m3.974s
00:33:14.612 sys 0m0.656s
00:33:14.612 17:30:15 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:14.612 ************************************
00:33:14.612 END TEST dpdk_mem_utility
00:33:14.612 ************************************
00:33:14.612 17:30:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:33:14.612 17:30:15 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:33:14.612 17:30:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:33:14.612 17:30:15 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:14.612 17:30:15 -- common/autotest_common.sh@10 -- # set +x
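The listings above are DPDK heap-walk output captured by test_dpdk_mem_info.sh: every allocated element with its address and padded size, followed by the named memzones backing SPDK's pools and rings. As a rough orientation, here is a minimal sketch (not the test's own code; it assumes a DPDK 24.03 development setup and uses only documented DPDK calls) that produces output of the same shape:

```c
#include <stdio.h>
#include <rte_eal.h>
#include <rte_malloc.h>
#include <rte_memzone.h>

int main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0) {
		fprintf(stderr, "rte_eal_init failed\n");
		return 1;
	}

	/* one visible allocation so the dump has something to show */
	void *buf = rte_malloc("example_alloc", 4096, 0);

	rte_malloc_dump_heaps(stdout);	/* per-heap element walk: address + size lines */
	rte_memzone_dump(stdout);	/* named reserved zones (mempools, rings, ...) */

	rte_free(buf);
	rte_eal_cleanup();
	return 0;
}
```

DPDK's naming convention explains the entries seen here: MP_* memzones back rte_mempool objects (SPDK's msgpool, bdev_io and PDU pools) and RG_* memzones back rte_ring objects, which is why the same pools show up in both the element walk and the memzone list.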
00:33:14.612 ************************************ 00:33:14.612 START TEST event 00:33:14.612 ************************************ 00:33:14.612 17:30:15 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:33:14.612 * Looking for test storage... 00:33:14.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:33:14.612 17:30:15 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:14.612 17:30:15 event -- common/autotest_common.sh@1693 -- # lcov --version 00:33:14.612 17:30:15 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:14.871 17:30:15 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:14.871 17:30:15 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:14.871 17:30:15 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:14.871 17:30:15 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:14.871 17:30:15 event -- scripts/common.sh@336 -- # IFS=.-: 00:33:14.871 17:30:15 event -- scripts/common.sh@336 -- # read -ra ver1 00:33:14.871 17:30:15 event -- scripts/common.sh@337 -- # IFS=.-: 00:33:14.871 17:30:15 event -- scripts/common.sh@337 -- # read -ra ver2 00:33:14.871 17:30:15 event -- scripts/common.sh@338 -- # local 'op=<' 00:33:14.871 17:30:15 event -- scripts/common.sh@340 -- # ver1_l=2 00:33:14.871 17:30:15 event -- scripts/common.sh@341 -- # ver2_l=1 00:33:14.871 17:30:15 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:14.871 17:30:15 event -- scripts/common.sh@344 -- # case "$op" in 00:33:14.871 17:30:15 event -- scripts/common.sh@345 -- # : 1 00:33:14.871 17:30:15 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:14.871 17:30:15 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:14.871 17:30:15 event -- scripts/common.sh@365 -- # decimal 1 00:33:14.871 17:30:15 event -- scripts/common.sh@353 -- # local d=1 00:33:14.871 17:30:15 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:14.871 17:30:15 event -- scripts/common.sh@355 -- # echo 1 00:33:14.871 17:30:15 event -- scripts/common.sh@365 -- # ver1[v]=1 00:33:14.871 17:30:15 event -- scripts/common.sh@366 -- # decimal 2 00:33:14.871 17:30:15 event -- scripts/common.sh@353 -- # local d=2 00:33:14.871 17:30:15 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:14.871 17:30:15 event -- scripts/common.sh@355 -- # echo 2 00:33:14.871 17:30:15 event -- scripts/common.sh@366 -- # ver2[v]=2 00:33:14.871 17:30:15 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:14.871 17:30:15 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:14.871 17:30:15 event -- scripts/common.sh@368 -- # return 0 00:33:14.871 17:30:15 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:14.871 17:30:15 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:14.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:14.871 --rc genhtml_branch_coverage=1 00:33:14.871 --rc genhtml_function_coverage=1 00:33:14.871 --rc genhtml_legend=1 00:33:14.871 --rc geninfo_all_blocks=1 00:33:14.871 --rc geninfo_unexecuted_blocks=1 00:33:14.871 00:33:14.871 ' 00:33:14.871 17:30:15 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:14.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:14.871 --rc genhtml_branch_coverage=1 00:33:14.872 --rc genhtml_function_coverage=1 00:33:14.872 --rc genhtml_legend=1 00:33:14.872 --rc 
geninfo_all_blocks=1 00:33:14.872 --rc geninfo_unexecuted_blocks=1 00:33:14.872 00:33:14.872 ' 00:33:14.872 17:30:15 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:14.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:14.872 --rc genhtml_branch_coverage=1 00:33:14.872 --rc genhtml_function_coverage=1 00:33:14.872 --rc genhtml_legend=1 00:33:14.872 --rc geninfo_all_blocks=1 00:33:14.872 --rc geninfo_unexecuted_blocks=1 00:33:14.872 00:33:14.872 ' 00:33:14.872 17:30:15 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:14.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:14.872 --rc genhtml_branch_coverage=1 00:33:14.872 --rc genhtml_function_coverage=1 00:33:14.872 --rc genhtml_legend=1 00:33:14.872 --rc geninfo_all_blocks=1 00:33:14.872 --rc geninfo_unexecuted_blocks=1 00:33:14.872 00:33:14.872 ' 00:33:14.872 17:30:15 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:33:14.872 17:30:15 event -- bdev/nbd_common.sh@6 -- # set -e 00:33:14.872 17:30:15 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:33:14.872 17:30:15 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:33:14.872 17:30:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:14.872 17:30:15 event -- common/autotest_common.sh@10 -- # set +x 00:33:14.872 ************************************ 00:33:14.872 START TEST event_perf 00:33:14.872 ************************************ 00:33:14.872 17:30:15 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:33:14.872 Running I/O for 1 seconds...[2024-11-26 17:30:15.467099] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:33:14.872 [2024-11-26 17:30:15.467343] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59021 ] 00:33:15.131 [2024-11-26 17:30:15.650159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:15.131 [2024-11-26 17:30:15.775625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:15.131 [2024-11-26 17:30:15.775752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:15.131 [2024-11-26 17:30:15.775900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:15.131 [2024-11-26 17:30:15.775931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:16.511 Running I/O for 1 seconds... 00:33:16.511 lcore 0: 199318 00:33:16.511 lcore 1: 199316 00:33:16.511 lcore 2: 199315 00:33:16.511 lcore 3: 199316 00:33:16.511 done. 
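The four per-lcore counters above are event_perf's entire result: each reactor counts how many events it processed during the one-second window (-t 1). A minimal sketch of the primitive being measured, assuming SPDK v25.01-era headers (illustrative only, not the event_perf source):

```c
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/event.h"

static uint32_t g_done;

static void
per_core_fn(void *arg1, void *arg2)
{
	printf("event handled on lcore %u\n", spdk_env_get_current_core());
	/* stop the app once every reactor has seen its event */
	if (__atomic_add_fetch(&g_done, 1, __ATOMIC_SEQ_CST) == spdk_env_get_core_count()) {
		spdk_app_stop(0);
	}
}

static void
app_start(void *ctx)
{
	uint32_t core;

	SPDK_ENV_FOREACH_CORE(core) {
		/* allocate an event bound to 'core' and queue it on that reactor */
		spdk_event_call(spdk_event_allocate(core, per_core_fn, NULL, NULL));
	}
}

int
main(int argc, char **argv)
{
	struct spdk_app_opts opts = {};
	int rc;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "event_sketch";
	opts.reactor_mask = "0xF";	/* same -m 0xF mask as the run above */

	rc = spdk_app_start(&opts, app_start, NULL);
	spdk_app_fini();
	return rc;
}
```

The perf test loops this allocate/call pair as fast as possible; the ~199k events per lcore above are how many round trips each reactor completed in one second on this VM.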
00:33:16.511 00:33:16.511 ************************************ 00:33:16.511 END TEST event_perf 00:33:16.511 ************************************ 00:33:16.511 real 0m1.606s 00:33:16.511 user 0m4.347s 00:33:16.511 sys 0m0.134s 00:33:16.511 17:30:17 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:16.511 17:30:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:33:16.511 17:30:17 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:33:16.511 17:30:17 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:16.511 17:30:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:16.511 17:30:17 event -- common/autotest_common.sh@10 -- # set +x 00:33:16.511 ************************************ 00:33:16.511 START TEST event_reactor 00:33:16.511 ************************************ 00:33:16.511 17:30:17 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:33:16.511 [2024-11-26 17:30:17.138764] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:33:16.511 [2024-11-26 17:30:17.138880] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59060 ] 00:33:16.770 [2024-11-26 17:30:17.322285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.770 [2024-11-26 17:30:17.436892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.148 test_start 00:33:18.148 oneshot 00:33:18.148 tick 100 00:33:18.148 tick 100 00:33:18.148 tick 250 00:33:18.148 tick 100 00:33:18.148 tick 100 00:33:18.148 tick 100 00:33:18.148 tick 250 00:33:18.148 tick 500 00:33:18.148 tick 100 00:33:18.148 tick 100 00:33:18.148 tick 250 00:33:18.148 tick 100 00:33:18.148 tick 100 00:33:18.148 test_end 00:33:18.148 00:33:18.148 real 0m1.582s 00:33:18.148 user 0m1.363s 00:33:18.148 sys 0m0.112s 00:33:18.148 17:30:18 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:18.148 17:30:18 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:33:18.148 ************************************ 00:33:18.148 END TEST event_reactor 00:33:18.148 ************************************ 00:33:18.148 17:30:18 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:33:18.148 17:30:18 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:18.148 17:30:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:18.148 17:30:18 event -- common/autotest_common.sh@10 -- # set +x 00:33:18.148 ************************************ 00:33:18.148 START TEST event_reactor_perf 00:33:18.148 ************************************ 00:33:18.148 17:30:18 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:33:18.148 [2024-11-26 17:30:18.796288] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
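The tick trace above (event_reactor) and the reactor_perf run starting here both exercise SPDK pollers on a single reactor. A hedged sketch of a timed poller with a 100 us period, again assuming v25.01-era headers rather than quoting the test sources:

```c
#include "spdk/stdinc.h"
#include "spdk/event.h"
#include "spdk/thread.h"

static struct spdk_poller *g_poller;
static int g_ticks;

static int
tick_fn(void *ctx)
{
	/* invoked by the reactor every 100 us, like the 'tick 100' entries above */
	if (++g_ticks >= 10) {
		spdk_poller_unregister(&g_poller);
		spdk_app_stop(0);
	}
	return SPDK_POLLER_BUSY;
}

static void
app_start(void *ctx)
{
	g_poller = SPDK_POLLER_REGISTER(tick_fn, NULL, 100);	/* period in microseconds */
}

int
main(int argc, char **argv)
{
	struct spdk_app_opts opts = {};
	int rc;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "poller_sketch";
	opts.reactor_mask = "0x1";	/* single reactor, matching -c 0x1 above */

	rc = spdk_app_start(&opts, app_start, NULL);
	spdk_app_fini();
	return rc;
}
```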
00:33:18.148 [2024-11-26 17:30:18.796407] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59097 ] 00:33:18.408 [2024-11-26 17:30:18.980077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.408 [2024-11-26 17:30:19.099006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:19.799 test_start 00:33:19.799 test_end 00:33:19.799 Performance: 370837 events per second 00:33:19.799 00:33:19.799 real 0m1.583s 00:33:19.799 user 0m1.369s 00:33:19.799 sys 0m0.105s 00:33:19.799 ************************************ 00:33:19.799 END TEST event_reactor_perf 00:33:19.799 ************************************ 00:33:19.799 17:30:20 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:19.799 17:30:20 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:33:19.799 17:30:20 event -- event/event.sh@49 -- # uname -s 00:33:19.799 17:30:20 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:33:19.799 17:30:20 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:33:19.799 17:30:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:19.799 17:30:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:19.799 17:30:20 event -- common/autotest_common.sh@10 -- # set +x 00:33:19.799 ************************************ 00:33:19.799 START TEST event_scheduler 00:33:19.799 ************************************ 00:33:19.799 17:30:20 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:33:20.059 * Looking for test storage... 
00:33:20.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:33:20.059 17:30:20 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:20.059 17:30:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:33:20.059 17:30:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:20.059 17:30:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:20.059 17:30:20 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:33:20.059 17:30:20 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:20.059 17:30:20 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:20.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.059 --rc genhtml_branch_coverage=1 00:33:20.059 --rc genhtml_function_coverage=1 00:33:20.059 --rc genhtml_legend=1 00:33:20.059 --rc geninfo_all_blocks=1 00:33:20.059 --rc geninfo_unexecuted_blocks=1 00:33:20.059 00:33:20.059 ' 00:33:20.059 17:30:20 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:20.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.059 --rc genhtml_branch_coverage=1 00:33:20.059 --rc genhtml_function_coverage=1 00:33:20.059 --rc genhtml_legend=1 00:33:20.059 --rc geninfo_all_blocks=1 00:33:20.059 --rc geninfo_unexecuted_blocks=1 00:33:20.059 00:33:20.059 ' 00:33:20.059 17:30:20 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:20.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.059 --rc genhtml_branch_coverage=1 00:33:20.059 --rc genhtml_function_coverage=1 00:33:20.059 --rc genhtml_legend=1 00:33:20.059 --rc geninfo_all_blocks=1 00:33:20.059 --rc geninfo_unexecuted_blocks=1 00:33:20.059 00:33:20.059 ' 00:33:20.059 17:30:20 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:20.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.059 --rc genhtml_branch_coverage=1 00:33:20.059 --rc genhtml_function_coverage=1 00:33:20.059 --rc genhtml_legend=1 00:33:20.059 --rc geninfo_all_blocks=1 00:33:20.059 --rc geninfo_unexecuted_blocks=1 00:33:20.059 00:33:20.059 ' 00:33:20.059 17:30:20 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:33:20.059 17:30:20 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59173 00:33:20.059 17:30:20 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:33:20.060 17:30:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:33:20.060 17:30:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59173 00:33:20.060 17:30:20 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59173 ']' 00:33:20.060 17:30:20 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:20.060 17:30:20 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:20.060 17:30:20 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:20.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:20.060 17:30:20 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:20.060 17:30:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:33:20.319 [2024-11-26 17:30:20.771731] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:33:20.319 [2024-11-26 17:30:20.772052] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59173 ] 00:33:20.319 [2024-11-26 17:30:20.958565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:20.578 [2024-11-26 17:30:21.092158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:20.578 [2024-11-26 17:30:21.092284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:20.578 [2024-11-26 17:30:21.092421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:20.578 [2024-11-26 17:30:21.092453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:21.146 17:30:21 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:21.146 17:30:21 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:33:21.146 17:30:21 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:33:21.146 17:30:21 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.146 17:30:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:33:21.146 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:33:21.146 POWER: Cannot set governor of lcore 0 to userspace 00:33:21.146 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:33:21.146 POWER: Cannot set governor of lcore 0 to performance 00:33:21.146 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:33:21.146 POWER: Cannot set governor of lcore 0 to userspace 00:33:21.146 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:33:21.146 POWER: Cannot set governor of lcore 0 to userspace 00:33:21.146 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:33:21.146 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:33:21.146 POWER: Unable to set Power Management Environment for lcore 0 00:33:21.146 [2024-11-26 17:30:21.626285] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:33:21.146 [2024-11-26 17:30:21.626344] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:33:21.146 [2024-11-26 17:30:21.626381] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:33:21.146 [2024-11-26 17:30:21.626471] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:33:21.146 [2024-11-26 17:30:21.626520] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:33:21.146 [2024-11-26 17:30:21.626556] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:33:21.146 17:30:21 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.146 17:30:21 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:33:21.146 17:30:21 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.146 17:30:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:33:21.405 [2024-11-26 17:30:21.981754] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:33:21.405 17:30:21 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.405 17:30:21 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:33:21.405 17:30:21 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:21.405 17:30:21 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:21.405 17:30:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:33:21.405 ************************************ 00:33:21.405 START TEST scheduler_create_thread 00:33:21.405 ************************************ 00:33:21.405 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:33:21.405 17:30:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:33:21.405 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.405 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:33:21.405 2 00:33:21.405 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.405 17:30:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:33:21.405 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.405 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:33:21.405 3 00:33:21.405 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.405 17:30:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:33:21.405 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.405 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:33:21.405 4 00:33:21.405 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.405 17:30:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:33:21.405 17:30:22 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.405 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:33:21.405 5 00:33:21.405 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.405 17:30:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:33:21.405 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.406 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:33:21.406 6 00:33:21.406 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.406 17:30:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:33:21.406 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.406 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:33:21.406 7 00:33:21.406 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.406 17:30:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:33:21.406 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.406 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:33:21.406 8 00:33:21.406 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.406 17:30:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:33:21.406 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.406 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:33:21.666 9 00:33:21.666 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.666 17:30:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:33:21.666 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.666 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:33:21.666 10 00:33:21.666 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.666 17:30:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:33:21.666 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.666 17:30:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:33:23.042 17:30:23 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.042 17:30:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:33:23.042 17:30:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:33:23.042 17:30:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.042 17:30:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:33:23.609 17:30:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.609 17:30:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:33:23.609 17:30:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.609 17:30:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:33:24.546 17:30:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.546 17:30:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:33:24.546 17:30:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:33:24.546 17:30:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.546 17:30:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:33:25.479 ************************************ 00:33:25.479 END TEST scheduler_create_thread 00:33:25.479 ************************************ 00:33:25.479 17:30:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.479 00:33:25.479 real 0m3.886s 00:33:25.479 user 0m0.030s 00:33:25.479 sys 0m0.007s 00:33:25.479 17:30:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:25.479 17:30:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:33:25.479 17:30:25 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:33:25.479 17:30:25 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59173 00:33:25.479 17:30:25 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59173 ']' 00:33:25.479 17:30:25 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59173 00:33:25.479 17:30:25 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:33:25.479 17:30:25 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:25.479 17:30:25 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59173 00:33:25.479 killing process with pid 59173 00:33:25.479 17:30:25 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:33:25.479 17:30:25 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:33:25.479 17:30:25 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59173' 00:33:25.480 17:30:25 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59173 00:33:25.480 17:30:25 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59173 00:33:25.738 [2024-11-26 17:30:26.264979] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:33:27.160 00:33:27.160 real 0m7.046s 00:33:27.160 user 0m14.399s 00:33:27.160 sys 0m0.570s 00:33:27.160 17:30:27 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:27.160 ************************************ 00:33:27.160 END TEST event_scheduler 00:33:27.160 ************************************ 00:33:27.160 17:30:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:33:27.160 17:30:27 event -- event/event.sh@51 -- # modprobe -n nbd 00:33:27.160 17:30:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:33:27.160 17:30:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:27.160 17:30:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:27.160 17:30:27 event -- common/autotest_common.sh@10 -- # set +x 00:33:27.160 ************************************ 00:33:27.160 START TEST app_repeat 00:33:27.160 ************************************ 00:33:27.160 17:30:27 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:33:27.160 17:30:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:27.160 17:30:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:27.160 17:30:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:33:27.160 17:30:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:33:27.160 17:30:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:33:27.160 17:30:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:33:27.160 17:30:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:33:27.160 17:30:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59300 00:33:27.160 17:30:27 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:33:27.160 17:30:27 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:33:27.160 17:30:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59300' 00:33:27.160 Process app_repeat pid: 59300 00:33:27.160 17:30:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:33:27.160 17:30:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:33:27.160 spdk_app_start Round 0 00:33:27.160 17:30:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59300 /var/tmp/spdk-nbd.sock 00:33:27.160 17:30:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59300 ']' 00:33:27.160 17:30:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:33:27.160 17:30:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:27.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:33:27.160 17:30:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:33:27.160 17:30:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:27.160 17:30:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:33:27.160 [2024-11-26 17:30:27.613796] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
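The event_scheduler suite that just finished switched the framework to the dynamic scheduler and created pinned and idle threads through the scheduler_plugin RPCs (scheduler_thread_create -n active_pinned -m 0x1 -a 100 and similar). A rough C equivalent of those two steps, assuming the v25.01-era public API (spdk_scheduler_set and spdk_thread_create exist there, but this is an illustration, not the plugin's implementation):

```c
#include "spdk/stdinc.h"
#include "spdk/event.h"
#include "spdk/thread.h"
#include "spdk/cpuset.h"
#include "spdk/scheduler.h"

static void
app_start(void *ctx)
{
	struct spdk_cpuset mask;
	struct spdk_thread *thread;

	/* equivalent of 'rpc.py framework_set_scheduler dynamic' */
	if (spdk_scheduler_set("dynamic") != 0) {
		spdk_app_stop(-1);
		return;
	}

	/* a thread pinned to lcore 0, like 'scheduler_thread_create -m 0x1' */
	spdk_cpuset_zero(&mask);
	spdk_cpuset_set_cpu(&mask, 0, true);
	thread = spdk_thread_create("active_pinned", &mask);
	if (thread != NULL) {
		printf("created thread '%s'\n", spdk_thread_get_name(thread));
	}
	spdk_app_stop(thread != NULL ? 0 : -1);
}

int
main(int argc, char **argv)
{
	struct spdk_app_opts opts = {};
	int rc;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "scheduler_sketch";
	opts.reactor_mask = "0xF";

	rc = spdk_app_start(&opts, app_start, NULL);
	spdk_app_fini();
	return rc;
}
```

The POWER warnings earlier in the scheduler startup are expected on this VM: the dynamic scheduler tries to take over CPU frequency governors, which a guest without cpufreq support cannot provide, so it falls back as the "Unable to initialize dpdk governor" notice records.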
00:33:27.160 [2024-11-26 17:30:27.613911] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59300 ] 00:33:27.160 [2024-11-26 17:30:27.800917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:27.419 [2024-11-26 17:30:27.925013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:27.419 [2024-11-26 17:30:27.925047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:27.985 17:30:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:27.985 17:30:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:33:27.985 17:30:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:33:28.243 Malloc0 00:33:28.243 17:30:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:33:28.502 Malloc1 00:33:28.502 17:30:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:33:28.502 17:30:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:28.502 17:30:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:33:28.502 17:30:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:33:28.502 17:30:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:28.502 17:30:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:33:28.502 17:30:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:33:28.502 17:30:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:28.502 17:30:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:33:28.502 17:30:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:28.502 17:30:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:28.502 17:30:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:28.502 17:30:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:33:28.502 17:30:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:28.502 17:30:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:28.502 17:30:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:33:28.767 /dev/nbd0 00:33:28.767 17:30:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:28.767 17:30:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:28.767 17:30:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:33:28.767 17:30:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:33:28.767 17:30:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:28.767 17:30:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:28.767 17:30:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:33:28.767 17:30:29 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:33:28.767 17:30:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:28.767 17:30:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:28.767 17:30:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:33:28.767 1+0 records in 00:33:28.767 1+0 records out 00:33:28.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270217 s, 15.2 MB/s 00:33:28.767 17:30:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:33:28.767 17:30:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:33:28.767 17:30:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:33:28.767 17:30:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:28.767 17:30:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:33:28.767 17:30:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:28.767 17:30:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:28.767 17:30:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:33:29.029 /dev/nbd1 00:33:29.029 17:30:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:29.029 17:30:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:29.029 17:30:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:33:29.029 17:30:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:33:29.029 17:30:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:29.029 17:30:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:29.029 17:30:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:33:29.029 17:30:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:33:29.029 17:30:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:29.029 17:30:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:29.029 17:30:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:33:29.029 1+0 records in 00:33:29.029 1+0 records out 00:33:29.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425671 s, 9.6 MB/s 00:33:29.029 17:30:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:33:29.029 17:30:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:33:29.029 17:30:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:33:29.029 17:30:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:29.029 17:30:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:33:29.029 17:30:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:29.029 17:30:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:29.029 17:30:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:29.029 17:30:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:29.029 
17:30:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:29.289 17:30:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:33:29.289 { 00:33:29.289 "nbd_device": "/dev/nbd0", 00:33:29.289 "bdev_name": "Malloc0" 00:33:29.289 }, 00:33:29.289 { 00:33:29.289 "nbd_device": "/dev/nbd1", 00:33:29.289 "bdev_name": "Malloc1" 00:33:29.289 } 00:33:29.289 ]' 00:33:29.289 17:30:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:33:29.289 { 00:33:29.289 "nbd_device": "/dev/nbd0", 00:33:29.289 "bdev_name": "Malloc0" 00:33:29.289 }, 00:33:29.289 { 00:33:29.289 "nbd_device": "/dev/nbd1", 00:33:29.289 "bdev_name": "Malloc1" 00:33:29.289 } 00:33:29.289 ]' 00:33:29.289 17:30:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:29.289 17:30:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:33:29.289 /dev/nbd1' 00:33:29.289 17:30:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:33:29.289 /dev/nbd1' 00:33:29.289 17:30:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:29.289 17:30:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:33:29.289 17:30:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:33:29.289 17:30:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:33:29.289 17:30:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:33:29.289 17:30:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:33:29.289 17:30:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:29.289 17:30:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:29.289 17:30:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:33:29.289 17:30:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:33:29.289 17:30:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:33:29.289 17:30:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:33:29.289 256+0 records in 00:33:29.289 256+0 records out 00:33:29.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106264 s, 98.7 MB/s 00:33:29.289 17:30:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:29.289 17:30:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:33:29.289 256+0 records in 00:33:29.289 256+0 records out 00:33:29.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0279996 s, 37.4 MB/s 00:33:29.289 17:30:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:29.289 17:30:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:33:29.548 256+0 records in 00:33:29.548 256+0 records out 00:33:29.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0348506 s, 30.1 MB/s 00:33:29.548 17:30:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:33:29.548 17:30:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:29.548 17:30:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:29.548 17:30:29 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:33:29.548 17:30:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:33:29.548 17:30:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:33:29.548 17:30:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:33:29.548 17:30:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:29.548 17:30:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:33:29.548 17:30:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:29.548 17:30:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:33:29.548 17:30:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:33:29.548 17:30:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:33:29.548 17:30:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:29.548 17:30:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:29.548 17:30:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:29.548 17:30:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:33:29.548 17:30:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:29.548 17:30:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:29.548 17:30:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:29.548 17:30:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:29.548 17:30:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:29.548 17:30:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:29.548 17:30:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:29.548 17:30:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:29.807 17:30:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:33:29.807 17:30:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:33:29.807 17:30:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:29.807 17:30:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:33:29.807 17:30:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:29.807 17:30:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:29.807 17:30:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:29.807 17:30:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:29.807 17:30:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:29.807 17:30:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:29.807 17:30:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:33:29.807 17:30:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:33:29.807 17:30:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:29.807 17:30:30 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:29.807 17:30:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:30.065 17:30:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:30.065 17:30:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:30.065 17:30:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:30.323 17:30:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:30.324 17:30:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:33:30.324 17:30:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:30.324 17:30:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:33:30.324 17:30:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:33:30.324 17:30:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:33:30.324 17:30:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:33:30.324 17:30:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:33:30.324 17:30:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:33:30.324 17:30:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:33:30.582 17:30:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:33:31.962 [2024-11-26 17:30:32.377127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:31.962 [2024-11-26 17:30:32.490534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:31.962 [2024-11-26 17:30:32.490557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:32.221 [2024-11-26 17:30:32.684127] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:33:32.221 [2024-11-26 17:30:32.684189] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:33:33.599 spdk_app_start Round 1 00:33:33.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:33:33.599 17:30:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:33:33.599 17:30:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:33:33.599 17:30:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59300 /var/tmp/spdk-nbd.sock 00:33:33.599 17:30:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59300 ']' 00:33:33.599 17:30:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:33:33.599 17:30:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:33.599 17:30:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
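Round 0 above, like the two rounds that follow, drives one fixed sequence over the RPC socket: create two 64 MB malloc bdevs with a 4096-byte block size, expose them as /dev/nbd0 and /dev/nbd1, write 1 MiB of random data through each, cmp-verify it back, detach, and SIGTERM the app. A sketch of that per-round flow, assuming the nbd kernel module is already loaded (the harness runs modprobe nbd); the RPC variable and /tmp/nbdrandtest are stand-ins, everything else mirrors the xtrace:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096           # -> Malloc0
    $RPC bdev_malloc_create 64 4096           # -> Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of=$nbd bs=4096 count=256 oflag=direct
        cmp -b -n 1M /tmp/nbdrandtest $nbd    # byte-for-byte verify
    done
    rm /tmp/nbdrandtest
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1
    $RPC spdk_kill_instance SIGTERM           # ends the round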
00:33:33.599 17:30:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:33.599 17:30:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:33:33.858 17:30:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:33.858 17:30:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:33:33.858 17:30:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:33:34.117 Malloc0 00:33:34.117 17:30:34 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:33:34.376 Malloc1 00:33:34.376 17:30:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:33:34.376 17:30:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:34.376 17:30:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:33:34.376 17:30:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:33:34.376 17:30:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:34.376 17:30:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:33:34.376 17:30:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:33:34.376 17:30:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:34.376 17:30:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:33:34.376 17:30:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:34.376 17:30:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:34.376 17:30:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:34.376 17:30:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:33:34.376 17:30:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:34.376 17:30:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:34.376 17:30:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:33:34.635 /dev/nbd0 00:33:34.635 17:30:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:34.635 17:30:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:34.635 17:30:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:33:34.635 17:30:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:33:34.635 17:30:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:34.635 17:30:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:34.635 17:30:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:33:34.635 17:30:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:33:34.635 17:30:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:34.635 17:30:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:34.635 17:30:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:33:34.635 1+0 records in 00:33:34.635 1+0 records out 
00:33:34.635 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003521 s, 11.6 MB/s 00:33:34.635 17:30:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:33:34.635 17:30:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:33:34.635 17:30:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:33:34.635 17:30:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:34.635 17:30:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:33:34.635 17:30:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:34.635 17:30:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:34.635 17:30:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:33:34.894 /dev/nbd1 00:33:34.894 17:30:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:34.894 17:30:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:34.894 17:30:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:33:34.894 17:30:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:33:34.894 17:30:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:34.894 17:30:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:34.894 17:30:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:33:34.894 17:30:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:33:34.894 17:30:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:34.894 17:30:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:34.894 17:30:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:33:34.894 1+0 records in 00:33:34.894 1+0 records out 00:33:34.894 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370075 s, 11.1 MB/s 00:33:34.894 17:30:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:33:34.894 17:30:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:33:34.894 17:30:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:33:34.894 17:30:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:34.894 17:30:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:33:34.894 17:30:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:34.894 17:30:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:34.894 17:30:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:34.894 17:30:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:34.894 17:30:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:35.153 17:30:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:33:35.153 { 00:33:35.153 "nbd_device": "/dev/nbd0", 00:33:35.153 "bdev_name": "Malloc0" 00:33:35.153 }, 00:33:35.153 { 00:33:35.153 "nbd_device": "/dev/nbd1", 00:33:35.153 "bdev_name": "Malloc1" 00:33:35.153 } 
00:33:35.153 ]' 00:33:35.153 17:30:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:33:35.153 { 00:33:35.153 "nbd_device": "/dev/nbd0", 00:33:35.153 "bdev_name": "Malloc0" 00:33:35.153 }, 00:33:35.153 { 00:33:35.153 "nbd_device": "/dev/nbd1", 00:33:35.153 "bdev_name": "Malloc1" 00:33:35.153 } 00:33:35.153 ]' 00:33:35.153 17:30:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:35.153 17:30:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:33:35.153 /dev/nbd1' 00:33:35.153 17:30:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:33:35.153 /dev/nbd1' 00:33:35.153 17:30:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:35.153 17:30:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:33:35.153 17:30:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:33:35.413 256+0 records in 00:33:35.413 256+0 records out 00:33:35.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126065 s, 83.2 MB/s 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:33:35.413 256+0 records in 00:33:35.413 256+0 records out 00:33:35.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299096 s, 35.1 MB/s 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:33:35.413 256+0 records in 00:33:35.413 256+0 records out 00:33:35.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0317538 s, 33.0 MB/s 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:33:35.413 17:30:35 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:35.413 17:30:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:35.680 17:30:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:35.680 17:30:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:35.680 17:30:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:35.680 17:30:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:35.680 17:30:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:35.680 17:30:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:35.680 17:30:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:33:35.680 17:30:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:33:35.680 17:30:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:35.680 17:30:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:33:35.938 17:30:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:35.938 17:30:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:35.938 17:30:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:35.938 17:30:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:35.938 17:30:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:35.938 17:30:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:35.938 17:30:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:33:35.938 17:30:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:33:35.938 17:30:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:35.938 17:30:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:35.938 17:30:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:36.197 17:30:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:36.197 17:30:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:36.197 17:30:36 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:33:36.197 17:30:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:36.197 17:30:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:33:36.197 17:30:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:36.197 17:30:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:33:36.197 17:30:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:33:36.197 17:30:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:33:36.197 17:30:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:33:36.197 17:30:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:33:36.197 17:30:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:33:36.197 17:30:36 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:33:36.766 17:30:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:33:38.143 [2024-11-26 17:30:38.521890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:38.143 [2024-11-26 17:30:38.669859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:38.144 [2024-11-26 17:30:38.669875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:38.402 [2024-11-26 17:30:38.908103] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:33:38.402 [2024-11-26 17:30:38.908194] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:33:39.781 spdk_app_start Round 2 00:33:39.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:33:39.781 17:30:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:33:39.781 17:30:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:33:39.781 17:30:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59300 /var/tmp/spdk-nbd.sock 00:33:39.781 17:30:40 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59300 ']' 00:33:39.781 17:30:40 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:33:39.781 17:30:40 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:39.781 17:30:40 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
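Before any I/O, each nbd_start_disk above is followed by the waitfornbd poll visible in the trace: up to 20 checks of /proc/partitions for the device name, then a single direct-I/O read through the device and a size check on the result. A sketch reconstructed from that xtrace; the sleep between retries is an assumption (not visible in the trace), and the test-file path is shortened:

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                          # assumed backoff
        done
        # prove the device is readable: pull one 4 KiB block through it
        dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]                       # non-empty read == success
    }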
00:33:39.781 17:30:40 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:39.781 17:30:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:33:40.040 17:30:40 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:40.040 17:30:40 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:33:40.040 17:30:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:33:40.299 Malloc0 00:33:40.299 17:30:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:33:40.566 Malloc1 00:33:40.566 17:30:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:33:40.566 17:30:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:40.566 17:30:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:33:40.567 17:30:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:33:40.567 17:30:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:40.567 17:30:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:33:40.567 17:30:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:33:40.567 17:30:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:40.567 17:30:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:33:40.567 17:30:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:40.567 17:30:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:40.567 17:30:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:40.567 17:30:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:33:40.567 17:30:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:40.567 17:30:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:40.567 17:30:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:33:40.567 /dev/nbd0 00:33:40.837 17:30:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:40.837 17:30:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:40.837 17:30:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:33:40.837 17:30:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:33:40.837 17:30:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:40.837 17:30:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:40.837 17:30:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:33:40.837 17:30:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:33:40.837 17:30:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:40.837 17:30:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:40.837 17:30:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:33:40.837 1+0 records in 00:33:40.837 1+0 records out 
00:33:40.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421181 s, 9.7 MB/s 00:33:40.837 17:30:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:33:40.837 17:30:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:33:40.837 17:30:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:33:40.837 17:30:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:40.837 17:30:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:33:40.837 17:30:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:40.837 17:30:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:40.837 17:30:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:33:40.837 /dev/nbd1 00:33:40.837 17:30:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:40.837 17:30:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:40.837 17:30:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:33:40.837 17:30:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:33:40.837 17:30:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:40.837 17:30:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:40.837 17:30:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:33:41.097 17:30:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:33:41.097 17:30:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:41.097 17:30:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:41.097 17:30:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:33:41.097 1+0 records in 00:33:41.097 1+0 records out 00:33:41.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432456 s, 9.5 MB/s 00:33:41.097 17:30:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:33:41.097 17:30:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:33:41.097 17:30:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:33:41.097 17:30:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:41.097 17:30:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:33:41.097 17:30:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:41.097 17:30:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:41.097 17:30:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:41.097 17:30:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:41.097 17:30:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:41.097 17:30:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:33:41.097 { 00:33:41.097 "nbd_device": "/dev/nbd0", 00:33:41.097 "bdev_name": "Malloc0" 00:33:41.097 }, 00:33:41.097 { 00:33:41.097 "nbd_device": "/dev/nbd1", 00:33:41.097 "bdev_name": "Malloc1" 00:33:41.097 } 
00:33:41.097 ]' 00:33:41.097 17:30:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:33:41.097 { 00:33:41.097 "nbd_device": "/dev/nbd0", 00:33:41.097 "bdev_name": "Malloc0" 00:33:41.097 }, 00:33:41.097 { 00:33:41.097 "nbd_device": "/dev/nbd1", 00:33:41.097 "bdev_name": "Malloc1" 00:33:41.097 } 00:33:41.097 ]' 00:33:41.097 17:30:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:33:41.357 /dev/nbd1' 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:33:41.357 /dev/nbd1' 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:33:41.357 256+0 records in 00:33:41.357 256+0 records out 00:33:41.357 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134199 s, 78.1 MB/s 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:33:41.357 256+0 records in 00:33:41.357 256+0 records out 00:33:41.357 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0294994 s, 35.5 MB/s 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:33:41.357 256+0 records in 00:33:41.357 256+0 records out 00:33:41.357 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0320133 s, 32.8 MB/s 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:33:41.357 17:30:41 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:33:41.357 17:30:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:33:41.358 17:30:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:33:41.358 17:30:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:41.358 17:30:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:41.358 17:30:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:41.358 17:30:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:33:41.358 17:30:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:41.358 17:30:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:41.615 17:30:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:41.615 17:30:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:41.615 17:30:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:41.615 17:30:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:41.615 17:30:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:41.615 17:30:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:41.615 17:30:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:33:41.615 17:30:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:33:41.615 17:30:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:41.615 17:30:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:33:41.873 17:30:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:41.873 17:30:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:41.873 17:30:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:41.873 17:30:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:41.873 17:30:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:41.873 17:30:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:41.873 17:30:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:33:41.873 17:30:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:33:41.873 17:30:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:41.873 17:30:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:41.873 17:30:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:42.130 17:30:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:42.130 17:30:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:42.130 17:30:42 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:33:42.130 17:30:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:42.130 17:30:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:33:42.130 17:30:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:42.130 17:30:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:33:42.130 17:30:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:33:42.130 17:30:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:33:42.130 17:30:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:33:42.130 17:30:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:33:42.130 17:30:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:33:42.130 17:30:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:33:42.387 17:30:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:33:43.757 [2024-11-26 17:30:44.228597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:43.757 [2024-11-26 17:30:44.334526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:43.757 [2024-11-26 17:30:44.334548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:44.017 [2024-11-26 17:30:44.527794] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:33:44.017 [2024-11-26 17:30:44.527856] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:33:45.388 17:30:46 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59300 /var/tmp/spdk-nbd.sock 00:33:45.388 17:30:46 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59300 ']' 00:33:45.388 17:30:46 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:33:45.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:33:45.388 17:30:46 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:45.388 17:30:46 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
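Each round, and the test as a whole, tears down through the killprocess trap installed at startup (trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT). As the next stretch of the trace shows, the helper confirms the pid is still alive, resolves its command name (reactor_0 here), checks whether it was launched through sudo, then signals and reaps it. A sketch under those observations; the sudo branch is summarized in a comment:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0     # already gone, nothing to do
        local process_name=$pid
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # when comm is "sudo" the real helper signals the child instead;
        # that branch is elided here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                        # assumed: exit status tolerated
    }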
00:33:45.388 17:30:46 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:45.388 17:30:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:33:45.645 17:30:46 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:45.645 17:30:46 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:33:45.645 17:30:46 event.app_repeat -- event/event.sh@39 -- # killprocess 59300 00:33:45.645 17:30:46 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59300 ']' 00:33:45.645 17:30:46 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59300 00:33:45.645 17:30:46 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:33:45.645 17:30:46 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:45.645 17:30:46 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59300 00:33:45.645 killing process with pid 59300 00:33:45.645 17:30:46 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:45.645 17:30:46 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:45.645 17:30:46 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59300' 00:33:45.645 17:30:46 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59300 00:33:45.645 17:30:46 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59300 00:33:47.018 spdk_app_start is called in Round 0. 00:33:47.018 Shutdown signal received, stop current app iteration 00:33:47.018 Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 reinitialization... 00:33:47.018 spdk_app_start is called in Round 1. 00:33:47.018 Shutdown signal received, stop current app iteration 00:33:47.018 Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 reinitialization... 00:33:47.018 spdk_app_start is called in Round 2. 00:33:47.018 Shutdown signal received, stop current app iteration 00:33:47.018 Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 reinitialization... 00:33:47.019 spdk_app_start is called in Round 3. 00:33:47.019 Shutdown signal received, stop current app iteration 00:33:47.019 17:30:47 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:33:47.019 17:30:47 event.app_repeat -- event/event.sh@42 -- # return 0 00:33:47.019 00:33:47.019 real 0m19.850s 00:33:47.019 user 0m42.243s 00:33:47.019 sys 0m3.310s 00:33:47.019 17:30:47 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:47.019 17:30:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:33:47.019 ************************************ 00:33:47.019 END TEST app_repeat 00:33:47.019 ************************************ 00:33:47.019 17:30:47 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:33:47.019 17:30:47 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:33:47.019 17:30:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:47.019 17:30:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:47.019 17:30:47 event -- common/autotest_common.sh@10 -- # set +x 00:33:47.019 ************************************ 00:33:47.019 START TEST cpu_locks 00:33:47.019 ************************************ 00:33:47.019 17:30:47 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:33:47.019 * Looking for test storage... 
00:33:47.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:33:47.019 17:30:47 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:47.019 17:30:47 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:33:47.019 17:30:47 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:47.019 17:30:47 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:47.019 17:30:47 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:33:47.019 17:30:47 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:47.019 17:30:47 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:47.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.019 --rc genhtml_branch_coverage=1 00:33:47.019 --rc genhtml_function_coverage=1 00:33:47.019 --rc genhtml_legend=1 00:33:47.019 --rc geninfo_all_blocks=1 00:33:47.019 --rc geninfo_unexecuted_blocks=1 00:33:47.019 00:33:47.019 ' 00:33:47.019 17:30:47 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:47.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.019 --rc genhtml_branch_coverage=1 00:33:47.019 --rc genhtml_function_coverage=1 
00:33:47.019 --rc genhtml_legend=1 00:33:47.019 --rc geninfo_all_blocks=1 00:33:47.019 --rc geninfo_unexecuted_blocks=1 00:33:47.019 00:33:47.019 ' 00:33:47.019 17:30:47 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:47.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.019 --rc genhtml_branch_coverage=1 00:33:47.019 --rc genhtml_function_coverage=1 00:33:47.019 --rc genhtml_legend=1 00:33:47.019 --rc geninfo_all_blocks=1 00:33:47.019 --rc geninfo_unexecuted_blocks=1 00:33:47.019 00:33:47.019 ' 00:33:47.019 17:30:47 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:47.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.019 --rc genhtml_branch_coverage=1 00:33:47.019 --rc genhtml_function_coverage=1 00:33:47.019 --rc genhtml_legend=1 00:33:47.019 --rc geninfo_all_blocks=1 00:33:47.019 --rc geninfo_unexecuted_blocks=1 00:33:47.019 00:33:47.019 ' 00:33:47.019 17:30:47 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:33:47.019 17:30:47 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:33:47.019 17:30:47 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:33:47.019 17:30:47 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:33:47.019 17:30:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:47.019 17:30:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:47.019 17:30:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:33:47.019 ************************************ 00:33:47.019 START TEST default_locks 00:33:47.019 ************************************ 00:33:47.019 17:30:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:33:47.019 17:30:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59750 00:33:47.019 17:30:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:33:47.019 17:30:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59750 00:33:47.019 17:30:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59750 ']' 00:33:47.019 17:30:47 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:47.019 17:30:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:47.019 17:30:47 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:47.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:47.019 17:30:47 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:47.019 17:30:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:33:47.278 [2024-11-26 17:30:47.813508] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:33:47.278 [2024-11-26 17:30:47.815679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59750 ] 00:33:47.537 [2024-11-26 17:30:48.021622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.537 [2024-11-26 17:30:48.175899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.913 17:30:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:48.913 17:30:49 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:33:48.913 17:30:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59750 00:33:48.913 17:30:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59750 00:33:48.913 17:30:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:33:49.172 17:30:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59750 00:33:49.172 17:30:49 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59750 ']' 00:33:49.172 17:30:49 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59750 00:33:49.172 17:30:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:33:49.172 17:30:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:49.172 17:30:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59750 00:33:49.172 killing process with pid 59750 00:33:49.172 17:30:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:49.172 17:30:49 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:49.172 17:30:49 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59750' 00:33:49.172 17:30:49 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59750 00:33:49.172 17:30:49 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59750 00:33:51.705 17:30:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59750 00:33:51.705 17:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:33:51.705 17:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59750 00:33:51.705 17:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:33:51.705 17:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:51.705 17:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:33:51.705 17:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:51.705 17:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59750 00:33:51.705 17:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59750 ']' 00:33:51.705 17:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:51.705 17:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:51.705 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:51.705 17:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:51.705 ERROR: process (pid: 59750) is no longer running 00:33:51.705 17:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:51.705 17:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:33:51.705 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59750) - No such process 00:33:51.705 17:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:51.705 17:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:33:51.705 17:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:33:51.705 17:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:51.705 17:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:51.705 17:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:51.705 17:30:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:33:51.705 17:30:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:33:51.705 17:30:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:33:51.705 17:30:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:33:51.705 00:33:51.705 real 0m4.588s 00:33:51.705 user 0m4.314s 00:33:51.705 sys 0m0.909s 00:33:51.705 17:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:51.705 17:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:33:51.705 ************************************ 00:33:51.705 END TEST default_locks 00:33:51.705 ************************************ 00:33:51.705 17:30:52 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:33:51.705 17:30:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:51.705 17:30:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:51.705 17:30:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:33:51.705 ************************************ 00:33:51.705 START TEST default_locks_via_rpc 00:33:51.705 ************************************ 00:33:51.705 17:30:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:33:51.705 17:30:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59832 00:33:51.705 17:30:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:33:51.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
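For reference, the lock probe that default_locks performs above reduces to asking the kernel which files the target pid holds advisory locks on. A minimal standalone sketch of that check (helper name and lock-file prefix taken from the trace; an illustration, not the suite's exact helper):

    # Does <pid> hold an SPDK CPU-core lock?  spdk_tgt -m 0x1 locks
    # /var/tmp/spdk_cpu_lock_000, which lslocks(8) reports per process.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist 59750 && echo "pid 59750 holds a core lock"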
00:33:51.705 17:30:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59832 00:33:51.705 17:30:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59832 ']' 00:33:51.705 17:30:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:51.705 17:30:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:51.705 17:30:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:51.705 17:30:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:51.705 17:30:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:51.963 [2024-11-26 17:30:52.499477] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:33:51.963 [2024-11-26 17:30:52.499645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59832 ] 00:33:52.222 [2024-11-26 17:30:52.701801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:52.222 [2024-11-26 17:30:52.857558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:53.598 17:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:53.598 17:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:33:53.598 17:30:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:33:53.598 17:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.598 17:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:53.598 17:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.598 17:30:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:33:53.598 17:30:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:33:53.598 17:30:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:33:53.598 17:30:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:33:53.598 17:30:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:33:53.598 17:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.598 17:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:53.598 17:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.598 17:30:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59832 00:33:53.598 17:30:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59832 00:33:53.598 17:30:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:33:53.857 17:30:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59832 00:33:53.857 17:30:54 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59832 ']' 00:33:53.857 17:30:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59832 00:33:54.116 17:30:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:33:54.116 17:30:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:54.116 17:30:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59832 00:33:54.116 17:30:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:54.116 killing process with pid 59832 00:33:54.116 17:30:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:54.116 17:30:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59832' 00:33:54.116 17:30:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59832 00:33:54.116 17:30:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59832 00:33:57.408 ************************************ 00:33:57.408 END TEST default_locks_via_rpc 00:33:57.408 ************************************ 00:33:57.408 00:33:57.408 real 0m4.987s 00:33:57.408 user 0m4.777s 00:33:57.408 sys 0m0.967s 00:33:57.408 17:30:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:57.408 17:30:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:57.408 17:30:57 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:33:57.408 17:30:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:57.408 17:30:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:57.408 17:30:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:33:57.408 ************************************ 00:33:57.408 START TEST non_locking_app_on_locked_coremask 00:33:57.408 ************************************ 00:33:57.408 17:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:33:57.408 17:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59918 00:33:57.408 17:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:33:57.408 17:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59918 /var/tmp/spdk.sock 00:33:57.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:57.408 17:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59918 ']' 00:33:57.408 17:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:57.408 17:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:57.408 17:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
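The default_locks_via_rpc run that just finished exercises the same lock state over JSON-RPC instead of process lifetime. A hedged sketch of the equivalent manual session (the scripts/rpc.py invocation is assumed; the two method names are the ones visible in the rpc_cmd calls above):

    # Release the core locks at runtime, then re-acquire them.
    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    lslocks | grep spdk_cpu_lock || echo "no core locks held"
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    lslocks | grep spdk_cpu_lock        # the core-0 lock is back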
00:33:57.408 17:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:57.408 17:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:33:57.408 [2024-11-26 17:30:57.550947] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:33:57.408 [2024-11-26 17:30:57.551307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59918 ] 00:33:57.408 [2024-11-26 17:30:57.733298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:57.408 [2024-11-26 17:30:57.893344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:58.351 17:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:58.351 17:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:33:58.351 17:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59934 00:33:58.351 17:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59934 /var/tmp/spdk2.sock 00:33:58.351 17:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:33:58.351 17:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59934 ']' 00:33:58.351 17:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:33:58.351 17:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:58.351 17:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:33:58.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:33:58.351 17:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:58.351 17:30:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:33:58.610 [2024-11-26 17:30:59.133738] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:33:58.610 [2024-11-26 17:30:59.134514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59934 ] 00:33:58.867 [2024-11-26 17:30:59.325746] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
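The "CPU core locks deactivated" notice above is the second target acknowledging that it skipped lock acquisition entirely, which is what lets it share core 0 with pid 59918. Reproducing the arrangement by hand looks roughly like this (binary path and socket names as in the trace):

    # First target claims core 0 and its lock file.
    build/bin/spdk_tgt -m 0x1 &
    # Second target coexists on core 0 only because it opts out of
    # locking and moves its RPC server to a second socket.
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &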
00:33:58.867 [2024-11-26 17:30:59.325817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:59.127 [2024-11-26 17:30:59.662437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:01.653 17:31:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:01.653 17:31:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:34:01.653 17:31:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59918 00:34:01.653 17:31:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59918 00:34:01.653 17:31:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:34:02.220 17:31:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59918 00:34:02.220 17:31:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59918 ']' 00:34:02.220 17:31:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59918 00:34:02.220 17:31:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:34:02.220 17:31:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:02.220 17:31:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59918 00:34:02.221 17:31:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:02.221 17:31:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:02.221 killing process with pid 59918 00:34:02.221 17:31:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59918' 00:34:02.221 17:31:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59918 00:34:02.221 17:31:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59918 00:34:07.494 17:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59934 00:34:07.494 17:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59934 ']' 00:34:07.494 17:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59934 00:34:07.494 17:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:34:07.494 17:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:07.494 17:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59934 00:34:07.494 killing process with pid 59934 00:34:07.494 17:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:07.494 17:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:07.494 17:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59934' 00:34:07.494 17:31:07 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59934 00:34:07.494 17:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59934 00:34:09.395 ************************************ 00:34:09.395 END TEST non_locking_app_on_locked_coremask 00:34:09.395 ************************************ 00:34:09.395 00:34:09.395 real 0m12.531s 00:34:09.395 user 0m12.702s 00:34:09.395 sys 0m1.670s 00:34:09.395 17:31:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:09.395 17:31:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:34:09.395 17:31:10 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:34:09.395 17:31:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:09.395 17:31:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:09.395 17:31:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:34:09.395 ************************************ 00:34:09.395 START TEST locking_app_on_unlocked_coremask 00:34:09.395 ************************************ 00:34:09.395 17:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:34:09.395 17:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60095 00:34:09.395 17:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:34:09.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:09.395 17:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60095 /var/tmp/spdk.sock 00:34:09.395 17:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60095 ']' 00:34:09.395 17:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:09.395 17:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:09.395 17:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:09.395 17:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:09.395 17:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:34:09.653 [2024-11-26 17:31:10.141550] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:34:09.653 [2024-11-26 17:31:10.141682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60095 ] 00:34:09.653 [2024-11-26 17:31:10.323660] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
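locking_app_on_unlocked_coremask, starting above, flips the roles: the first target (pid 60095) runs unlocked, so a second, normally locking target can still claim the same core, and the lock ends up attributed to the second pid. The ordering, sketched under the same assumptions as the previous example:

    # Unlocked instance first: it never touches /var/tmp/spdk_cpu_lock_000.
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
    # A locking instance on the same core now succeeds, and lslocks
    # attributes the core-0 lock to this second pid.
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &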
00:34:09.653 [2024-11-26 17:31:10.323715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:09.912 [2024-11-26 17:31:10.443113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:10.849 17:31:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:10.849 17:31:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:34:10.849 17:31:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60111 00:34:10.849 17:31:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:34:10.849 17:31:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60111 /var/tmp/spdk2.sock 00:34:10.849 17:31:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60111 ']' 00:34:10.849 17:31:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:34:10.849 17:31:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:10.849 17:31:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:34:10.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:34:10.849 17:31:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:10.849 17:31:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:34:10.849 [2024-11-26 17:31:11.463651] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:34:10.849 [2024-11-26 17:31:11.464000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60111 ] 00:34:11.108 [2024-11-26 17:31:11.645284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.367 [2024-11-26 17:31:11.880903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:13.901 17:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:13.901 17:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:34:13.901 17:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60111 00:34:13.901 17:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60111 00:34:13.901 17:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:34:14.508 17:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60095 00:34:14.508 17:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60095 ']' 00:34:14.508 17:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60095 00:34:14.508 17:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:34:14.508 17:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:14.508 17:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60095 00:34:14.508 17:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:14.508 17:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:14.508 killing process with pid 60095 00:34:14.508 17:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60095' 00:34:14.508 17:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60095 00:34:14.508 17:31:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60095 00:34:19.777 17:31:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60111 00:34:19.777 17:31:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60111 ']' 00:34:19.777 17:31:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60111 00:34:19.777 17:31:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:34:19.777 17:31:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:19.777 17:31:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60111 00:34:19.777 killing process with pid 60111 00:34:19.777 17:31:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:19.777 17:31:19 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:19.777 17:31:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60111' 00:34:19.777 17:31:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60111 00:34:19.777 17:31:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60111 00:34:21.677 00:34:21.677 real 0m12.102s 00:34:21.677 user 0m12.438s 00:34:21.677 sys 0m1.411s 00:34:21.677 17:31:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:21.677 17:31:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:34:21.677 ************************************ 00:34:21.677 END TEST locking_app_on_unlocked_coremask 00:34:21.677 ************************************ 00:34:21.677 17:31:22 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:34:21.677 17:31:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:21.677 17:31:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:21.677 17:31:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:34:21.677 ************************************ 00:34:21.677 START TEST locking_app_on_locked_coremask 00:34:21.677 ************************************ 00:34:21.677 17:31:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:34:21.677 17:31:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60266 00:34:21.677 17:31:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:34:21.677 17:31:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60266 /var/tmp/spdk.sock 00:34:21.677 17:31:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60266 ']' 00:34:21.677 17:31:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:21.677 17:31:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:21.677 17:31:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:21.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:21.677 17:31:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:21.677 17:31:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:34:21.677 [2024-11-26 17:31:22.317444] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:34:21.677 [2024-11-26 17:31:22.317821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60266 ] 00:34:21.935 [2024-11-26 17:31:22.494675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:21.935 [2024-11-26 17:31:22.609991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:22.873 17:31:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:22.873 17:31:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:34:22.873 17:31:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60282 00:34:22.873 17:31:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60282 /var/tmp/spdk2.sock 00:34:22.873 17:31:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:34:22.873 17:31:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:34:22.873 17:31:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60282 /var/tmp/spdk2.sock 00:34:22.873 17:31:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:34:22.873 17:31:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:22.873 17:31:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:34:22.873 17:31:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:22.873 17:31:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60282 /var/tmp/spdk2.sock 00:34:22.873 17:31:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60282 ']' 00:34:22.873 17:31:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:34:22.873 17:31:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:22.873 17:31:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:34:22.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:34:22.873 17:31:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:22.873 17:31:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:34:23.133 [2024-11-26 17:31:23.626946] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:34:23.133 [2024-11-26 17:31:23.627155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60282 ] 00:34:23.133 [2024-11-26 17:31:23.810161] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60266 has claimed it. 00:34:23.133 [2024-11-26 17:31:23.810229] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:34:23.701 ERROR: process (pid: 60282) is no longer running 00:34:23.701 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60282) - No such process 00:34:23.701 17:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:23.701 17:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:34:23.701 17:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:34:23.701 17:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:23.701 17:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:23.701 17:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:23.701 17:31:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60266 00:34:23.701 17:31:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60266 00:34:23.701 17:31:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:34:23.961 17:31:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60266 00:34:23.961 17:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60266 ']' 00:34:23.961 17:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60266 00:34:23.961 17:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:34:23.961 17:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:23.961 17:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60266 00:34:23.961 17:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:23.961 17:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:23.961 17:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60266' 00:34:23.961 killing process with pid 60266 00:34:23.961 17:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60266 00:34:23.961 17:31:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60266 00:34:26.495 00:34:26.495 real 0m4.833s 00:34:26.495 user 0m5.014s 00:34:26.495 sys 0m0.816s 00:34:26.495 17:31:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:26.495 ************************************ 00:34:26.495 END 
TEST locking_app_on_locked_coremask 00:34:26.495 17:31:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:34:26.495 ************************************ 00:34:26.495 17:31:27 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:34:26.495 17:31:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:26.495 17:31:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:26.495 17:31:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:34:26.495 ************************************ 00:34:26.495 START TEST locking_overlapped_coremask 00:34:26.495 ************************************ 00:34:26.495 17:31:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:34:26.495 17:31:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60351 00:34:26.495 17:31:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60351 /var/tmp/spdk.sock 00:34:26.495 17:31:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:34:26.495 17:31:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60351 ']' 00:34:26.495 17:31:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:26.495 17:31:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:26.495 17:31:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:26.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:26.495 17:31:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:26.495 17:31:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:34:26.754 [2024-11-26 17:31:27.219665] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:34:26.754 [2024-11-26 17:31:27.219992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60351 ] 00:34:26.754 [2024-11-26 17:31:27.404098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:27.014 [2024-11-26 17:31:27.520242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:27.014 [2024-11-26 17:31:27.520384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:27.014 [2024-11-26 17:31:27.520439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:27.950 17:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:27.950 17:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:34:27.950 17:31:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60375 00:34:27.950 17:31:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:34:27.950 17:31:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60375 /var/tmp/spdk2.sock 00:34:27.950 17:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:34:27.950 17:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60375 /var/tmp/spdk2.sock 00:34:27.950 17:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:34:27.950 17:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:27.950 17:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:34:27.950 17:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:27.950 17:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60375 /var/tmp/spdk2.sock 00:34:27.950 17:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60375 ']' 00:34:27.950 17:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:34:27.950 17:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:27.950 17:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:34:27.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:34:27.950 17:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:27.950 17:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:34:27.950 [2024-11-26 17:31:28.528949] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:34:27.950 [2024-11-26 17:31:28.529072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60375 ] 00:34:28.209 [2024-11-26 17:31:28.714756] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60351 has claimed it. 00:34:28.209 [2024-11-26 17:31:28.714839] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:34:28.468 ERROR: process (pid: 60375) is no longer running 00:34:28.468 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60375) - No such process 00:34:28.468 17:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:28.468 17:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:34:28.468 17:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:34:28.468 17:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:28.468 17:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:28.468 17:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:28.468 17:31:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:34:28.468 17:31:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:34:28.468 17:31:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:34:28.468 17:31:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:34:28.468 17:31:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60351 00:34:28.468 17:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60351 ']' 00:34:28.468 17:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60351 00:34:28.468 17:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:34:28.468 17:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:28.729 17:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60351 00:34:28.729 killing process with pid 60351 00:34:28.729 17:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:28.729 17:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:28.729 17:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60351' 00:34:28.729 17:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60351 00:34:28.729 17:31:29 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60351 00:34:31.340 00:34:31.340 real 0m4.532s 00:34:31.340 user 0m12.254s 00:34:31.340 sys 0m0.638s 00:34:31.340 17:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:31.340 ************************************ 00:34:31.340 END TEST locking_overlapped_coremask 00:34:31.340 ************************************ 00:34:31.340 17:31:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:34:31.340 17:31:31 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:34:31.340 17:31:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:31.340 17:31:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:31.340 17:31:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:34:31.340 ************************************ 00:34:31.340 START TEST locking_overlapped_coremask_via_rpc 00:34:31.340 ************************************ 00:34:31.340 17:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:34:31.340 17:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60439 00:34:31.340 17:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:34:31.340 17:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60439 /var/tmp/spdk.sock 00:34:31.340 17:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60439 ']' 00:34:31.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:31.340 17:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:31.340 17:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:31.340 17:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:31.340 17:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:31.340 17:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:34:31.340 [2024-11-26 17:31:31.825824] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:34:31.340 [2024-11-26 17:31:31.825975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60439 ] 00:34:31.340 [2024-11-26 17:31:32.009977] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
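Cleanup in the overlapped test that just ended compares the surviving lock files against the set expected for mask 0x7 (cores 0-2). The check_remaining_locks logic visible in the trace reduces to a glob-versus-brace-expansion comparison; a minimal sketch:

    # Mask 0x7 = cores 0-2, so exactly these three lock files should exist.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] \
        || echo "unexpected lock files: ${locks[*]}"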
00:34:31.340 [2024-11-26 17:31:32.010026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:31.598 [2024-11-26 17:31:32.130370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:31.598 [2024-11-26 17:31:32.130656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:31.598 [2024-11-26 17:31:32.130719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:32.535 17:31:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:32.535 17:31:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:34:32.535 17:31:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:34:32.535 17:31:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60457 00:34:32.535 17:31:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60457 /var/tmp/spdk2.sock 00:34:32.535 17:31:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60457 ']' 00:34:32.535 17:31:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:34:32.535 17:31:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:32.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:34:32.535 17:31:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:34:32.535 17:31:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:32.535 17:31:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:34:32.535 [2024-11-26 17:31:33.079653] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:34:32.535 [2024-11-26 17:31:33.080354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60457 ] 00:34:32.794 [2024-11-26 17:31:33.265574] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:34:32.794 [2024-11-26 17:31:33.265642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:33.052 [2024-11-26 17:31:33.536472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:33.052 [2024-11-26 17:31:33.539586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:33.052 [2024-11-26 17:31:33.539612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:34.957 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:34.957 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:34:34.957 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:34:34.957 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.957 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:34:35.217 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.217 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:34:35.217 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:34:35.217 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:34:35.217 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:35.217 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:35.217 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:35.217 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:35.217 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:34:35.217 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.217 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:34:35.217 [2024-11-26 17:31:35.663681] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60439 has claimed it. 
00:34:35.217 request: 00:34:35.217 { 00:34:35.217 "method": "framework_enable_cpumask_locks", 00:34:35.217 "req_id": 1 00:34:35.217 } 00:34:35.217 Got JSON-RPC error response 00:34:35.217 response: 00:34:35.217 { 00:34:35.217 "code": -32603, 00:34:35.217 "message": "Failed to claim CPU core: 2" 00:34:35.217 } 00:34:35.217 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:35.217 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:34:35.217 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:35.217 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:35.217 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:35.217 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60439 /var/tmp/spdk.sock 00:34:35.217 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60439 ']' 00:34:35.217 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:35.217 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:35.217 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:35.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:35.217 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:35.217 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:34:35.476 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:35.477 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:34:35.477 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60457 /var/tmp/spdk2.sock 00:34:35.477 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60457 ']' 00:34:35.477 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:34:35.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:34:35.477 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:35.477 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
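The -32603 response above is the expected outcome: the first target claimed its cores when framework_enable_cpumask_locks ran against the default socket, so the same RPC against the second target fails on the shared core 2. A minimal sketch of the two calls, assuming the scripts/rpc.py wrapper seen elsewhere in this trace:

    scripts/rpc.py framework_enable_cpumask_locks                         # first target: creates /var/tmp/spdk_cpu_lock_000..002
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target: fails, core 2 already locked (-32603)

The check_remaining_locks step just below then simply globs /var/tmp/spdk_cpu_lock_* and compares the result against the set expected for cores 0-2.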
00:34:35.477 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:35.477 17:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:34:35.736 17:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:35.736 ************************************ 00:34:35.736 END TEST locking_overlapped_coremask_via_rpc 00:34:35.736 ************************************ 00:34:35.736 17:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:34:35.736 17:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:34:35.736 17:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:34:35.736 17:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:34:35.736 17:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:34:35.736 00:34:35.736 real 0m4.498s 00:34:35.736 user 0m1.350s 00:34:35.736 sys 0m0.238s 00:34:35.736 17:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:35.736 17:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:34:35.736 17:31:36 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:34:35.736 17:31:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60439 ]] 00:34:35.736 17:31:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60439 00:34:35.736 17:31:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60439 ']' 00:34:35.736 17:31:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60439 00:34:35.736 17:31:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:34:35.736 17:31:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:35.736 17:31:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60439 00:34:35.736 17:31:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:35.736 killing process with pid 60439 00:34:35.736 17:31:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:35.736 17:31:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60439' 00:34:35.736 17:31:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60439 00:34:35.736 17:31:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60439 00:34:38.298 17:31:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60457 ]] 00:34:38.298 17:31:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60457 00:34:38.298 17:31:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60457 ']' 00:34:38.298 17:31:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60457 00:34:38.298 17:31:38 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:34:38.298 17:31:38 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:38.298 
17:31:38 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60457 00:34:38.298 killing process with pid 60457 00:34:38.298 17:31:38 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:34:38.298 17:31:38 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:34:38.298 17:31:38 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60457' 00:34:38.298 17:31:38 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60457 00:34:38.298 17:31:38 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60457 00:34:40.843 17:31:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:34:40.843 17:31:41 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:34:40.843 17:31:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60439 ]] 00:34:40.843 17:31:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60439 00:34:40.843 17:31:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60439 ']' 00:34:40.843 Process with pid 60439 is not found 00:34:40.843 17:31:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60439 00:34:40.843 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60439) - No such process 00:34:40.843 17:31:41 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60439 is not found' 00:34:40.843 17:31:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60457 ]] 00:34:40.843 17:31:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60457 00:34:40.843 17:31:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60457 ']' 00:34:40.843 Process with pid 60457 is not found 00:34:40.843 17:31:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60457 00:34:40.843 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60457) - No such process 00:34:40.843 17:31:41 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60457 is not found' 00:34:40.843 17:31:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:34:40.843 00:34:40.843 real 0m53.960s 00:34:40.843 user 1m30.071s 00:34:40.843 sys 0m7.954s 00:34:40.843 17:31:41 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:40.843 ************************************ 00:34:40.843 END TEST cpu_locks 00:34:40.843 ************************************ 00:34:40.843 17:31:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:34:40.843 ************************************ 00:34:40.843 END TEST event 00:34:40.843 ************************************ 00:34:40.843 00:34:40.843 real 1m26.330s 00:34:40.843 user 2m34.077s 00:34:40.843 sys 0m12.596s 00:34:40.843 17:31:41 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:40.843 17:31:41 event -- common/autotest_common.sh@10 -- # set +x 00:34:41.103 17:31:41 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:34:41.103 17:31:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:41.103 17:31:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:41.103 17:31:41 -- common/autotest_common.sh@10 -- # set +x 00:34:41.103 ************************************ 00:34:41.103 START TEST thread 00:34:41.103 ************************************ 00:34:41.103 17:31:41 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:34:41.103 * Looking for test storage... 
00:34:41.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:34:41.103 17:31:41 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:41.103 17:31:41 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:34:41.103 17:31:41 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:41.103 17:31:41 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:41.103 17:31:41 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:41.103 17:31:41 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:41.103 17:31:41 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:41.103 17:31:41 thread -- scripts/common.sh@336 -- # IFS=.-: 00:34:41.103 17:31:41 thread -- scripts/common.sh@336 -- # read -ra ver1 00:34:41.103 17:31:41 thread -- scripts/common.sh@337 -- # IFS=.-: 00:34:41.103 17:31:41 thread -- scripts/common.sh@337 -- # read -ra ver2 00:34:41.103 17:31:41 thread -- scripts/common.sh@338 -- # local 'op=<' 00:34:41.103 17:31:41 thread -- scripts/common.sh@340 -- # ver1_l=2 00:34:41.103 17:31:41 thread -- scripts/common.sh@341 -- # ver2_l=1 00:34:41.103 17:31:41 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:41.103 17:31:41 thread -- scripts/common.sh@344 -- # case "$op" in 00:34:41.103 17:31:41 thread -- scripts/common.sh@345 -- # : 1 00:34:41.361 17:31:41 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:41.361 17:31:41 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:41.361 17:31:41 thread -- scripts/common.sh@365 -- # decimal 1 00:34:41.361 17:31:41 thread -- scripts/common.sh@353 -- # local d=1 00:34:41.361 17:31:41 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:41.361 17:31:41 thread -- scripts/common.sh@355 -- # echo 1 00:34:41.361 17:31:41 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:34:41.361 17:31:41 thread -- scripts/common.sh@366 -- # decimal 2 00:34:41.361 17:31:41 thread -- scripts/common.sh@353 -- # local d=2 00:34:41.361 17:31:41 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:41.361 17:31:41 thread -- scripts/common.sh@355 -- # echo 2 00:34:41.361 17:31:41 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:34:41.361 17:31:41 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:41.361 17:31:41 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:41.361 17:31:41 thread -- scripts/common.sh@368 -- # return 0 00:34:41.361 17:31:41 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:41.361 17:31:41 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:41.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.361 --rc genhtml_branch_coverage=1 00:34:41.361 --rc genhtml_function_coverage=1 00:34:41.361 --rc genhtml_legend=1 00:34:41.361 --rc geninfo_all_blocks=1 00:34:41.361 --rc geninfo_unexecuted_blocks=1 00:34:41.361 00:34:41.361 ' 00:34:41.361 17:31:41 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:41.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.361 --rc genhtml_branch_coverage=1 00:34:41.361 --rc genhtml_function_coverage=1 00:34:41.361 --rc genhtml_legend=1 00:34:41.361 --rc geninfo_all_blocks=1 00:34:41.361 --rc geninfo_unexecuted_blocks=1 00:34:41.361 00:34:41.361 ' 00:34:41.362 17:31:41 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:41.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:34:41.362 --rc genhtml_branch_coverage=1 00:34:41.362 --rc genhtml_function_coverage=1 00:34:41.362 --rc genhtml_legend=1 00:34:41.362 --rc geninfo_all_blocks=1 00:34:41.362 --rc geninfo_unexecuted_blocks=1 00:34:41.362 00:34:41.362 ' 00:34:41.362 17:31:41 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:41.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.362 --rc genhtml_branch_coverage=1 00:34:41.362 --rc genhtml_function_coverage=1 00:34:41.362 --rc genhtml_legend=1 00:34:41.362 --rc geninfo_all_blocks=1 00:34:41.362 --rc geninfo_unexecuted_blocks=1 00:34:41.362 00:34:41.362 ' 00:34:41.362 17:31:41 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:34:41.362 17:31:41 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:34:41.362 17:31:41 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:41.362 17:31:41 thread -- common/autotest_common.sh@10 -- # set +x 00:34:41.362 ************************************ 00:34:41.362 START TEST thread_poller_perf 00:34:41.362 ************************************ 00:34:41.362 17:31:41 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:34:41.362 [2024-11-26 17:31:41.869012] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:34:41.362 [2024-11-26 17:31:41.869257] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60662 ] 00:34:41.362 [2024-11-26 17:31:42.054951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:41.620 [2024-11-26 17:31:42.176472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:41.620 Running 1000 pollers for 1 seconds with 1 microseconds period. 
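The banner line above maps directly onto the poller_perf flags; a sketch of the invocation, assuming -b is the poller count, -l the poller period in microseconds, and -t the run time in seconds (inferred from the banner text):

    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # 1000 timed pollers, 1 us period, 1 second run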
00:34:42.997 [2024-11-26T17:31:43.691Z] ====================================== 00:34:42.997 [2024-11-26T17:31:43.691Z] busy:2499428526 (cyc) 00:34:42.997 [2024-11-26T17:31:43.691Z] total_run_count: 375000 00:34:42.997 [2024-11-26T17:31:43.691Z] tsc_hz: 2490000000 (cyc) 00:34:42.997 [2024-11-26T17:31:43.691Z] ====================================== 00:34:42.997 [2024-11-26T17:31:43.691Z] poller_cost: 6665 (cyc), 2676 (nsec) 00:34:42.997 00:34:42.997 real 0m1.578s 00:34:42.997 user 0m1.366s 00:34:42.997 sys 0m0.104s 00:34:42.997 ************************************ 00:34:42.997 END TEST thread_poller_perf 00:34:42.997 ************************************ 00:34:42.997 17:31:43 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:42.997 17:31:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:34:42.997 17:31:43 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:34:42.997 17:31:43 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:34:42.998 17:31:43 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:42.998 17:31:43 thread -- common/autotest_common.sh@10 -- # set +x 00:34:42.998 ************************************ 00:34:42.998 START TEST thread_poller_perf 00:34:42.998 ************************************ 00:34:42.998 17:31:43 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:34:42.998 [2024-11-26 17:31:43.535214] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:34:42.998 [2024-11-26 17:31:43.535324] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60694 ] 00:34:43.256 [2024-11-26 17:31:43.718354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.256 [2024-11-26 17:31:43.839751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:43.256 Running 1000 pollers for 1 seconds with 0 microseconds period. 
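The 1-microsecond-period numbers above reduce to the reported poller_cost; the zero-period run that follows uses untimed (busy) pollers, which is why its per-call cost comes out far lower:

    poller_cost (cyc)  = busy / total_run_count          = 2499428526 / 375000 ~= 6665
    poller_cost (nsec) = poller_cost_cyc / (tsc_hz/1e9)  = 6665 / 2.49         ~= 2676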
00:34:44.764 [2024-11-26T17:31:45.458Z] ====================================== 00:34:44.764 [2024-11-26T17:31:45.458Z] busy:2494497926 (cyc) 00:34:44.764 [2024-11-26T17:31:45.458Z] total_run_count: 5082000 00:34:44.764 [2024-11-26T17:31:45.458Z] tsc_hz: 2490000000 (cyc) 00:34:44.764 [2024-11-26T17:31:45.458Z] ====================================== 00:34:44.764 [2024-11-26T17:31:45.458Z] poller_cost: 490 (cyc), 196 (nsec) 00:34:44.764 00:34:44.764 real 0m1.584s 00:34:44.764 user 0m1.369s 00:34:44.764 sys 0m0.107s 00:34:44.764 17:31:45 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:44.764 ************************************ 00:34:44.764 END TEST thread_poller_perf 00:34:44.764 ************************************ 00:34:44.764 17:31:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:34:44.764 17:31:45 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:34:44.764 ************************************ 00:34:44.764 END TEST thread 00:34:44.764 ************************************ 00:34:44.764 00:34:44.764 real 0m3.554s 00:34:44.764 user 0m2.916s 00:34:44.764 sys 0m0.425s 00:34:44.764 17:31:45 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:44.764 17:31:45 thread -- common/autotest_common.sh@10 -- # set +x 00:34:44.764 17:31:45 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:34:44.764 17:31:45 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:34:44.764 17:31:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:44.764 17:31:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:44.764 17:31:45 -- common/autotest_common.sh@10 -- # set +x 00:34:44.764 ************************************ 00:34:44.764 START TEST app_cmdline 00:34:44.764 ************************************ 00:34:44.765 17:31:45 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:34:44.765 * Looking for test storage... 
00:34:44.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:34:44.765 17:31:45 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:44.765 17:31:45 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:34:44.765 17:31:45 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:44.765 17:31:45 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@345 -- # : 1 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:44.765 17:31:45 app_cmdline -- scripts/common.sh@368 -- # return 0 00:34:44.765 17:31:45 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:44.765 17:31:45 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:44.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.765 --rc genhtml_branch_coverage=1 00:34:44.765 --rc genhtml_function_coverage=1 00:34:44.765 --rc genhtml_legend=1 00:34:44.765 --rc geninfo_all_blocks=1 00:34:44.765 --rc geninfo_unexecuted_blocks=1 00:34:44.765 00:34:44.765 ' 00:34:44.765 17:31:45 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:44.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.765 --rc genhtml_branch_coverage=1 00:34:44.765 --rc genhtml_function_coverage=1 00:34:44.765 --rc genhtml_legend=1 00:34:44.765 --rc geninfo_all_blocks=1 00:34:44.765 --rc geninfo_unexecuted_blocks=1 00:34:44.765 
00:34:44.765 ' 00:34:44.765 17:31:45 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:44.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.765 --rc genhtml_branch_coverage=1 00:34:44.765 --rc genhtml_function_coverage=1 00:34:44.765 --rc genhtml_legend=1 00:34:44.765 --rc geninfo_all_blocks=1 00:34:44.765 --rc geninfo_unexecuted_blocks=1 00:34:44.765 00:34:44.765 ' 00:34:44.765 17:31:45 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:44.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.765 --rc genhtml_branch_coverage=1 00:34:44.765 --rc genhtml_function_coverage=1 00:34:44.765 --rc genhtml_legend=1 00:34:44.765 --rc geninfo_all_blocks=1 00:34:44.765 --rc geninfo_unexecuted_blocks=1 00:34:44.765 00:34:44.765 ' 00:34:44.765 17:31:45 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:34:44.765 17:31:45 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60783 00:34:44.765 17:31:45 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:34:44.765 17:31:45 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60783 00:34:44.765 17:31:45 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60783 ']' 00:34:44.765 17:31:45 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:44.765 17:31:45 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:44.765 17:31:45 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:44.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:44.765 17:31:45 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:44.765 17:31:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:34:45.023 [2024-11-26 17:31:45.514779] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
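--rpcs-allowed restricts this target to the two listed methods; any other call is rejected, which is exactly what the "Method not found" (-32601) response further below demonstrates. The three RPCs this test exercises, as they appear in the trace:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version          # allow-listed
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods           # allow-listed
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats    # not listed -> -32601 Method not found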
00:34:45.023 [2024-11-26 17:31:45.515161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60783 ] 00:34:45.023 [2024-11-26 17:31:45.700428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:45.281 [2024-11-26 17:31:45.820574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:46.217 17:31:46 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:46.217 17:31:46 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:34:46.217 17:31:46 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:34:46.475 { 00:34:46.475 "version": "SPDK v25.01-pre git sha1 c86e5b182", 00:34:46.475 "fields": { 00:34:46.475 "major": 25, 00:34:46.475 "minor": 1, 00:34:46.475 "patch": 0, 00:34:46.475 "suffix": "-pre", 00:34:46.475 "commit": "c86e5b182" 00:34:46.475 } 00:34:46.475 } 00:34:46.475 17:31:46 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:34:46.475 17:31:46 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:34:46.475 17:31:46 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:34:46.475 17:31:46 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:34:46.475 17:31:46 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:34:46.475 17:31:46 app_cmdline -- app/cmdline.sh@26 -- # sort 00:34:46.475 17:31:46 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.475 17:31:46 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:34:46.475 17:31:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:34:46.475 17:31:46 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.475 17:31:46 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:34:46.475 17:31:46 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:34:46.475 17:31:46 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:34:46.475 17:31:46 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:34:46.475 17:31:46 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:34:46.475 17:31:46 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:46.475 17:31:46 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:46.475 17:31:46 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:46.475 17:31:46 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:46.475 17:31:46 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:46.475 17:31:46 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:46.475 17:31:46 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:46.475 17:31:46 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:34:46.476 17:31:46 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:34:46.735 request: 00:34:46.735 { 00:34:46.735 "method": "env_dpdk_get_mem_stats", 00:34:46.735 "req_id": 1 00:34:46.735 } 00:34:46.735 Got JSON-RPC error response 00:34:46.735 response: 00:34:46.735 { 00:34:46.735 "code": -32601, 00:34:46.735 "message": "Method not found" 00:34:46.735 } 00:34:46.735 17:31:47 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:34:46.735 17:31:47 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:46.735 17:31:47 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:46.735 17:31:47 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:46.735 17:31:47 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60783 00:34:46.735 17:31:47 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60783 ']' 00:34:46.735 17:31:47 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60783 00:34:46.735 17:31:47 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:34:46.735 17:31:47 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:46.735 17:31:47 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60783 00:34:46.735 killing process with pid 60783 00:34:46.735 17:31:47 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:46.735 17:31:47 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:46.735 17:31:47 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60783' 00:34:46.735 17:31:47 app_cmdline -- common/autotest_common.sh@973 -- # kill 60783 00:34:46.735 17:31:47 app_cmdline -- common/autotest_common.sh@978 -- # wait 60783 00:34:49.270 ************************************ 00:34:49.270 END TEST app_cmdline 00:34:49.270 ************************************ 00:34:49.270 00:34:49.270 real 0m4.464s 00:34:49.270 user 0m4.608s 00:34:49.270 sys 0m0.657s 00:34:49.270 17:31:49 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:49.270 17:31:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:34:49.270 17:31:49 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:34:49.270 17:31:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:49.270 17:31:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:49.270 17:31:49 -- common/autotest_common.sh@10 -- # set +x 00:34:49.270 ************************************ 00:34:49.270 START TEST version 00:34:49.270 ************************************ 00:34:49.270 17:31:49 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:34:49.271 * Looking for test storage... 
00:34:49.271 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:34:49.271 17:31:49 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:49.271 17:31:49 version -- common/autotest_common.sh@1693 -- # lcov --version 00:34:49.271 17:31:49 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:49.271 17:31:49 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:49.271 17:31:49 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:49.271 17:31:49 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:49.271 17:31:49 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:49.271 17:31:49 version -- scripts/common.sh@336 -- # IFS=.-: 00:34:49.271 17:31:49 version -- scripts/common.sh@336 -- # read -ra ver1 00:34:49.271 17:31:49 version -- scripts/common.sh@337 -- # IFS=.-: 00:34:49.271 17:31:49 version -- scripts/common.sh@337 -- # read -ra ver2 00:34:49.271 17:31:49 version -- scripts/common.sh@338 -- # local 'op=<' 00:34:49.271 17:31:49 version -- scripts/common.sh@340 -- # ver1_l=2 00:34:49.271 17:31:49 version -- scripts/common.sh@341 -- # ver2_l=1 00:34:49.271 17:31:49 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:49.271 17:31:49 version -- scripts/common.sh@344 -- # case "$op" in 00:34:49.271 17:31:49 version -- scripts/common.sh@345 -- # : 1 00:34:49.271 17:31:49 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:49.271 17:31:49 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:49.271 17:31:49 version -- scripts/common.sh@365 -- # decimal 1 00:34:49.271 17:31:49 version -- scripts/common.sh@353 -- # local d=1 00:34:49.271 17:31:49 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:49.271 17:31:49 version -- scripts/common.sh@355 -- # echo 1 00:34:49.271 17:31:49 version -- scripts/common.sh@365 -- # ver1[v]=1 00:34:49.271 17:31:49 version -- scripts/common.sh@366 -- # decimal 2 00:34:49.271 17:31:49 version -- scripts/common.sh@353 -- # local d=2 00:34:49.271 17:31:49 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:49.271 17:31:49 version -- scripts/common.sh@355 -- # echo 2 00:34:49.271 17:31:49 version -- scripts/common.sh@366 -- # ver2[v]=2 00:34:49.271 17:31:49 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:49.271 17:31:49 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:49.271 17:31:49 version -- scripts/common.sh@368 -- # return 0 00:34:49.271 17:31:49 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:49.271 17:31:49 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:49.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.271 --rc genhtml_branch_coverage=1 00:34:49.271 --rc genhtml_function_coverage=1 00:34:49.271 --rc genhtml_legend=1 00:34:49.271 --rc geninfo_all_blocks=1 00:34:49.271 --rc geninfo_unexecuted_blocks=1 00:34:49.271 00:34:49.271 ' 00:34:49.271 17:31:49 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:49.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.271 --rc genhtml_branch_coverage=1 00:34:49.271 --rc genhtml_function_coverage=1 00:34:49.271 --rc genhtml_legend=1 00:34:49.271 --rc geninfo_all_blocks=1 00:34:49.271 --rc geninfo_unexecuted_blocks=1 00:34:49.271 00:34:49.271 ' 00:34:49.271 17:31:49 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:49.271 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:34:49.271 --rc genhtml_branch_coverage=1 00:34:49.271 --rc genhtml_function_coverage=1 00:34:49.271 --rc genhtml_legend=1 00:34:49.271 --rc geninfo_all_blocks=1 00:34:49.271 --rc geninfo_unexecuted_blocks=1 00:34:49.271 00:34:49.271 ' 00:34:49.271 17:31:49 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:49.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.271 --rc genhtml_branch_coverage=1 00:34:49.271 --rc genhtml_function_coverage=1 00:34:49.271 --rc genhtml_legend=1 00:34:49.271 --rc geninfo_all_blocks=1 00:34:49.271 --rc geninfo_unexecuted_blocks=1 00:34:49.271 00:34:49.271 ' 00:34:49.271 17:31:49 version -- app/version.sh@17 -- # get_header_version major 00:34:49.271 17:31:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:34:49.271 17:31:49 version -- app/version.sh@14 -- # cut -f2 00:34:49.271 17:31:49 version -- app/version.sh@14 -- # tr -d '"' 00:34:49.271 17:31:49 version -- app/version.sh@17 -- # major=25 00:34:49.271 17:31:49 version -- app/version.sh@18 -- # get_header_version minor 00:34:49.271 17:31:49 version -- app/version.sh@14 -- # cut -f2 00:34:49.271 17:31:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:34:49.271 17:31:49 version -- app/version.sh@14 -- # tr -d '"' 00:34:49.271 17:31:49 version -- app/version.sh@18 -- # minor=1 00:34:49.271 17:31:49 version -- app/version.sh@19 -- # get_header_version patch 00:34:49.271 17:31:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:34:49.271 17:31:49 version -- app/version.sh@14 -- # tr -d '"' 00:34:49.271 17:31:49 version -- app/version.sh@14 -- # cut -f2 00:34:49.271 17:31:49 version -- app/version.sh@19 -- # patch=0 00:34:49.271 17:31:49 version -- app/version.sh@20 -- # get_header_version suffix 00:34:49.271 17:31:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:34:49.271 17:31:49 version -- app/version.sh@14 -- # cut -f2 00:34:49.271 17:31:49 version -- app/version.sh@14 -- # tr -d '"' 00:34:49.529 17:31:49 version -- app/version.sh@20 -- # suffix=-pre 00:34:49.529 17:31:49 version -- app/version.sh@22 -- # version=25.1 00:34:49.529 17:31:49 version -- app/version.sh@25 -- # (( patch != 0 )) 00:34:49.529 17:31:49 version -- app/version.sh@28 -- # version=25.1rc0 00:34:49.529 17:31:49 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:34:49.529 17:31:49 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:34:49.529 17:31:50 version -- app/version.sh@30 -- # py_version=25.1rc0 00:34:49.529 17:31:50 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:34:49.529 ************************************ 00:34:49.529 END TEST version 00:34:49.529 ************************************ 00:34:49.529 00:34:49.529 real 0m0.305s 00:34:49.529 user 0m0.183s 00:34:49.529 sys 0m0.168s 00:34:49.529 17:31:50 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:49.529 17:31:50 version -- common/autotest_common.sh@10 -- # set +x 00:34:49.529 17:31:50 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:34:49.529 17:31:50 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:34:49.529 17:31:50 -- spdk/autotest.sh@194 -- # uname -s 00:34:49.529 17:31:50 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:34:49.529 17:31:50 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:34:49.529 17:31:50 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:34:49.529 17:31:50 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:34:49.529 17:31:50 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:34:49.529 17:31:50 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:49.529 17:31:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:49.529 17:31:50 -- common/autotest_common.sh@10 -- # set +x 00:34:49.529 ************************************ 00:34:49.529 START TEST blockdev_nvme 00:34:49.529 ************************************ 00:34:49.529 17:31:50 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:34:49.529 * Looking for test storage... 00:34:49.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:34:49.529 17:31:50 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:49.529 17:31:50 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:34:49.529 17:31:50 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:49.788 17:31:50 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:49.788 17:31:50 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:34:49.788 17:31:50 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:49.788 17:31:50 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:49.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.788 --rc genhtml_branch_coverage=1 00:34:49.788 --rc genhtml_function_coverage=1 00:34:49.788 --rc genhtml_legend=1 00:34:49.788 --rc geninfo_all_blocks=1 00:34:49.788 --rc geninfo_unexecuted_blocks=1 00:34:49.788 00:34:49.788 ' 00:34:49.788 17:31:50 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:49.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.788 --rc genhtml_branch_coverage=1 00:34:49.788 --rc genhtml_function_coverage=1 00:34:49.788 --rc genhtml_legend=1 00:34:49.788 --rc geninfo_all_blocks=1 00:34:49.788 --rc geninfo_unexecuted_blocks=1 00:34:49.788 00:34:49.788 ' 00:34:49.788 17:31:50 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:49.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.788 --rc genhtml_branch_coverage=1 00:34:49.788 --rc genhtml_function_coverage=1 00:34:49.788 --rc genhtml_legend=1 00:34:49.788 --rc geninfo_all_blocks=1 00:34:49.788 --rc geninfo_unexecuted_blocks=1 00:34:49.788 00:34:49.788 ' 00:34:49.788 17:31:50 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:49.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.788 --rc genhtml_branch_coverage=1 00:34:49.788 --rc genhtml_function_coverage=1 00:34:49.788 --rc genhtml_legend=1 00:34:49.788 --rc geninfo_all_blocks=1 00:34:49.788 --rc geninfo_unexecuted_blocks=1 00:34:49.788 00:34:49.788 ' 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:34:49.788 17:31:50 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60979 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60979 00:34:49.788 17:31:50 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 60979 ']' 00:34:49.788 17:31:50 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:49.788 17:31:50 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:34:49.788 17:31:50 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:49.788 17:31:50 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:49.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:49.789 17:31:50 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:49.789 17:31:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:34:49.789 [2024-11-26 17:31:50.389251] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
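setup_nvme_conf (run just below) uses gen_nvme.sh to generate a bdev subsystem config and loads it with load_subsystem_config; on this VM that attaches four QEMU NVMe controllers. Condensing the bdev_get_bdevs dump that follows:

    Nvme0 @ 0000:00:10.0 -> Nvme0n1                      (4 KiB blocks, 64-byte separate metadata)
    Nvme1 @ 0000:00:11.0 -> Nvme1n1
    Nvme2 @ 0000:00:12.0 -> Nvme2n1, Nvme2n2, Nvme2n3    (one controller, three namespaces)
    Nvme3 @ 0000:00:13.0 -> Nvme3n1                      (FDP subsystem nqn.2019-08.org.qemu:fdp-subsys3, shareable namespace)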
00:34:49.789 [2024-11-26 17:31:50.389582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60979 ] 00:34:50.047 [2024-11-26 17:31:50.571850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:50.047 [2024-11-26 17:31:50.689277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:50.982 17:31:51 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:50.982 17:31:51 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:34:50.982 17:31:51 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:34:50.982 17:31:51 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:34:50.982 17:31:51 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:34:50.982 17:31:51 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:34:50.982 17:31:51 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:34:51.240 17:31:51 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:34:51.240 17:31:51 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.240 17:31:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:34:51.499 17:31:51 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.499 17:31:51 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:34:51.499 17:31:51 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.499 17:31:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:34:51.499 17:31:51 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.499 17:31:51 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:34:51.499 17:31:51 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:34:51.499 17:31:51 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.499 17:31:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:34:51.499 17:31:51 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.499 17:31:51 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:34:51.499 17:31:51 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.499 17:31:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:34:51.499 17:31:52 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.499 17:31:52 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:34:51.499 17:31:52 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.499 17:31:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:34:51.499 17:31:52 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.499 17:31:52 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:34:51.499 17:31:52 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:34:51.499 17:31:52 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:34:51.499 17:31:52 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.499 17:31:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:34:51.499 17:31:52 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.499 17:31:52 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:34:51.759 17:31:52 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:34:51.759 17:31:52 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "09c0b585-6581-4b5b-91c6-d7a57a818ed2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "09c0b585-6581-4b5b-91c6-d7a57a818ed2",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "b0795aa3-bc5e-4b30-a6c4-1bc7e0a87164"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b0795aa3-bc5e-4b30-a6c4-1bc7e0a87164",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "16106cd5-eae4-4ff2-ad2d-758889edc170"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "16106cd5-eae4-4ff2-ad2d-758889edc170",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "2631c3fe-125b-4a73-aa3c-b8f4c5323373"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2631c3fe-125b-4a73-aa3c-b8f4c5323373",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "690003f3-5e53-4c7b-b04a-7460187ddf76"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "690003f3-5e53-4c7b-b04a-7460187ddf76",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "bc1e4205-1047-420a-97a1-6e0c46300ea6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "bc1e4205-1047-420a-97a1-6e0c46300ea6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:34:51.759 17:31:52 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:34:51.759 17:31:52 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:34:51.759 17:31:52 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:34:51.759 17:31:52 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 60979 00:34:51.759 17:31:52 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 60979 ']' 00:34:51.759 17:31:52 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 60979 00:34:51.759 17:31:52 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:34:51.759 17:31:52 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:51.759 17:31:52 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60979 00:34:51.759 killing process with pid 60979 00:34:51.759 17:31:52 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:51.759 17:31:52 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:51.759 17:31:52 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60979' 00:34:51.759 17:31:52 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 60979 00:34:51.759 17:31:52 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 60979 00:34:54.299 17:31:54 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:54.299 17:31:54 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:34:54.299 17:31:54 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:34:54.299 17:31:54 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:54.299 17:31:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:34:54.299 ************************************ 00:34:54.299 START TEST bdev_hello_world 00:34:54.299 ************************************ 00:34:54.299 17:31:54 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:34:54.299 [2024-11-26 17:31:54.797258] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:34:54.299 [2024-11-26 17:31:54.797387] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61075 ] 00:34:54.299 [2024-11-26 17:31:54.978679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:54.565 [2024-11-26 17:31:55.095555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:55.135 [2024-11-26 17:31:55.776679] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:34:55.135 [2024-11-26 17:31:55.776735] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:34:55.135 [2024-11-26 17:31:55.776758] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:34:55.135 [2024-11-26 17:31:55.779807] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:34:55.135 [2024-11-26 17:31:55.780635] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:34:55.135 [2024-11-26 17:31:55.780675] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:34:55.135 [2024-11-26 17:31:55.780909] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
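The hello-world step above wrote a string through the bdev layer and read it back before the app shuts down just below. The exact binary and JSON config are visible in the run_test line earlier in the trace; a hand-run sketch of the same step (paths copied from the xtrace; sudo is an assumption, since the example needs direct access to the PCIe NVMe devices):

```sh
# Rerun the hello_bdev example against the first controller by hand.
# Paths are taken from the xtrace above; root privileges are an assumption.
SPDK=/home/vagrant/spdk_repo/spdk
sudo "$SPDK/build/examples/hello_bdev" \
    --json "$SPDK/test/bdev/bdev.json" \
    -b Nvme0n1
```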
00:34:55.135 00:34:55.135 [2024-11-26 17:31:55.780933] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:34:56.515 00:34:56.515 real 0m2.217s 00:34:56.515 user 0m1.838s 00:34:56.515 sys 0m0.268s 00:34:56.515 17:31:56 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:56.515 17:31:56 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:34:56.515 ************************************ 00:34:56.515 END TEST bdev_hello_world 00:34:56.515 ************************************ 00:34:56.515 17:31:56 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:34:56.515 17:31:56 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:56.515 17:31:56 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:56.515 17:31:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:34:56.515 ************************************ 00:34:56.515 START TEST bdev_bounds 00:34:56.515 ************************************ 00:34:56.515 17:31:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:34:56.515 Process bdevio pid: 61117 00:34:56.515 17:31:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61117 00:34:56.515 17:31:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:34:56.515 17:31:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:34:56.515 17:31:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61117' 00:34:56.515 17:31:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61117 00:34:56.515 17:31:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61117 ']' 00:34:56.515 17:31:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:56.515 17:31:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:56.515 17:31:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:56.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:56.515 17:31:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:56.515 17:31:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:34:56.515 [2024-11-26 17:31:57.088726] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
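The bounds test spinning up here runs bdevio in wait mode and drives the suites over RPC. Reconstructed from the command lines in the trace, a rough manual equivalent looks like this sketch (the sleep stands in for the harness's waitforlisten; reading -w as "wait for the perform_tests RPC" is an assumption based on the pairing visible below):

```sh
# Start bdevio as an RPC server, then kick off every suite via tests.py.
# Flags and paths are copied from the run_test line above.
SPDK=/home/vagrant/spdk_repo/spdk
sudo "$SPDK/test/bdev/bdevio/bdevio" -w -s 0 \
    --json "$SPDK/test/bdev/bdev.json" '' &
sleep 2   # crude stand-in for waitforlisten on /var/tmp/spdk.sock
sudo "$SPDK/test/bdev/bdevio/tests.py" perform_tests
```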
00:34:56.515 [2024-11-26 17:31:57.088856] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61117 ] 00:34:56.775 [2024-11-26 17:31:57.269992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:56.775 [2024-11-26 17:31:57.393003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.775 [2024-11-26 17:31:57.393057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:56.775 [2024-11-26 17:31:57.393091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:57.715 17:31:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:57.715 17:31:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:34:57.715 17:31:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:34:57.715 I/O targets: 00:34:57.715 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:34:57.715 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:34:57.715 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:34:57.715 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:34:57.715 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:34:57.715 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:34:57.715 00:34:57.715 00:34:57.715 CUnit - A unit testing framework for C - Version 2.1-3 00:34:57.715 http://cunit.sourceforge.net/ 00:34:57.715 00:34:57.715 00:34:57.715 Suite: bdevio tests on: Nvme3n1 00:34:57.715 Test: blockdev write read block ...passed 00:34:57.715 Test: blockdev write zeroes read block ...passed 00:34:57.715 Test: blockdev write zeroes read no split ...passed 00:34:57.715 Test: blockdev write zeroes read split ...passed 00:34:57.715 Test: blockdev write zeroes read split partial ...passed 00:34:57.715 Test: blockdev reset ...[2024-11-26 17:31:58.255552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:34:57.715 [2024-11-26 17:31:58.259590] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller spassed 00:34:57.715 Test: blockdev write read 8 blocks ...uccessful. 
00:34:57.715 passed 00:34:57.715 Test: blockdev write read size > 128k ...passed 00:34:57.715 Test: blockdev write read invalid size ...passed 00:34:57.715 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:57.715 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:57.715 Test: blockdev write read max offset ...passed 00:34:57.715 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:57.715 Test: blockdev writev readv 8 blocks ...passed 00:34:57.715 Test: blockdev writev readv 30 x 1block ...passed 00:34:57.715 Test: blockdev writev readv block ...passed 00:34:57.715 Test: blockdev writev readv size > 128k ...passed 00:34:57.715 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:57.715 Test: blockdev comparev and writev ...[2024-11-26 17:31:58.269256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf20a000 len:0x1000 00:34:57.715 [2024-11-26 17:31:58.269304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:34:57.715 passed 00:34:57.715 Test: blockdev nvme passthru rw ...passed 00:34:57.715 Test: blockdev nvme passthru vendor specific ...passed 00:34:57.715 Test: blockdev nvme admin passthru ...[2024-11-26 17:31:58.270204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:34:57.715 [2024-11-26 17:31:58.270245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:34:57.715 passed 00:34:57.715 Test: blockdev copy ...passed 00:34:57.715 Suite: bdevio tests on: Nvme2n3 00:34:57.715 Test: blockdev write read block ...passed 00:34:57.715 Test: blockdev write zeroes read block ...passed 00:34:57.715 Test: blockdev write zeroes read no split ...passed 00:34:57.715 Test: blockdev write zeroes read split ...passed 00:34:57.715 Test: blockdev write zeroes read split partial ...passed 00:34:57.715 Test: blockdev reset ...[2024-11-26 17:31:58.348122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:34:57.715 [2024-11-26 17:31:58.352548] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:34:57.715 Test: blockdev write read 8 blocks ...uccessful. 
00:34:57.715 passed 00:34:57.715 Test: blockdev write read size > 128k ...passed 00:34:57.715 Test: blockdev write read invalid size ...passed 00:34:57.715 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:57.715 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:57.715 Test: blockdev write read max offset ...passed 00:34:57.715 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:57.715 Test: blockdev writev readv 8 blocks ...passed 00:34:57.715 Test: blockdev writev readv 30 x 1block ...passed 00:34:57.715 Test: blockdev writev readv block ...passed 00:34:57.715 Test: blockdev writev readv size > 128k ...passed 00:34:57.715 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:57.715 Test: blockdev comparev and writev ...[2024-11-26 17:31:58.362397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a1c06000 len:0x1000 00:34:57.715 [2024-11-26 17:31:58.362451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:34:57.715 passed 00:34:57.715 Test: blockdev nvme passthru rw ...passed 00:34:57.715 Test: blockdev nvme passthru vendor specific ...passed 00:34:57.715 Test: blockdev nvme admin passthru ...[2024-11-26 17:31:58.363306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:34:57.715 [2024-11-26 17:31:58.363344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:34:57.715 passed 00:34:57.715 Test: blockdev copy ...passed 00:34:57.715 Suite: bdevio tests on: Nvme2n2 00:34:57.715 Test: blockdev write read block ...passed 00:34:57.715 Test: blockdev write zeroes read block ...passed 00:34:57.715 Test: blockdev write zeroes read no split ...passed 00:34:57.974 Test: blockdev write zeroes read split ...passed 00:34:57.974 Test: blockdev write zeroes read split partial ...passed 00:34:57.975 Test: blockdev reset ...[2024-11-26 17:31:58.442487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:34:57.975 [2024-11-26 17:31:58.446929] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:34:57.975 Test: blockdev write read 8 blocks ...uccessful. 
00:34:57.975 passed 00:34:57.975 Test: blockdev write read size > 128k ...passed 00:34:57.975 Test: blockdev write read invalid size ...passed 00:34:57.975 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:57.975 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:57.975 Test: blockdev write read max offset ...passed 00:34:57.975 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:57.975 Test: blockdev writev readv 8 blocks ...passed 00:34:57.975 Test: blockdev writev readv 30 x 1block ...passed 00:34:57.975 Test: blockdev writev readv block ...passed 00:34:57.975 Test: blockdev writev readv size > 128k ...passed 00:34:57.975 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:57.975 Test: blockdev comparev and writev ...[2024-11-26 17:31:58.457872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf23c000 len:0x1000 00:34:57.975 [2024-11-26 17:31:58.458063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:34:57.975 passed 00:34:57.975 Test: blockdev nvme passthru rw ...passed 00:34:57.975 Test: blockdev nvme passthru vendor specific ...[2024-11-26 17:31:58.459355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:34:57.975 [2024-11-26 17:31:58.459463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:34:57.975 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:34:57.975 passed 00:34:57.975 Test: blockdev copy ...passed 00:34:57.975 Suite: bdevio tests on: Nvme2n1 00:34:57.975 Test: blockdev write read block ...passed 00:34:57.975 Test: blockdev write zeroes read block ...passed 00:34:57.975 Test: blockdev write zeroes read no split ...passed 00:34:57.975 Test: blockdev write zeroes read split ...passed 00:34:57.975 Test: blockdev write zeroes read split partial ...passed 00:34:57.975 Test: blockdev reset ...[2024-11-26 17:31:58.538610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:34:57.975 passed 00:34:57.975 Test: blockdev write read 8 blocks ...[2024-11-26 17:31:58.542479] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
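A quick aside while the Nvme2n1 suite runs: the bdev names these suites iterate over were collected back at the mapfile/jq steps before bdevio started, by filtering bdev_get_bdevs for unclaimed bdevs and then taking their names. The harness's two jq passes collapse into one hand-runnable pipeline (default RPC socket assumed):

```sh
# List the names of all unclaimed bdevs in one jq filter, equivalent to the
# harness's select(.claimed == false) pass followed by jq -r .name.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select(.claimed == false) | .name'
```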
00:34:57.975 passed 00:34:57.975 Test: blockdev write read size > 128k ...passed 00:34:57.975 Test: blockdev write read invalid size ...passed 00:34:57.975 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:57.975 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:57.975 Test: blockdev write read max offset ...passed 00:34:57.975 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:57.975 Test: blockdev writev readv 8 blocks ...passed 00:34:57.975 Test: blockdev writev readv 30 x 1block ...passed 00:34:57.975 Test: blockdev writev readv block ...passed 00:34:57.975 Test: blockdev writev readv size > 128k ...passed 00:34:57.975 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:57.975 Test: blockdev comparev and writev ...[2024-11-26 17:31:58.551491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf238000 len:0x1000 00:34:57.975 [2024-11-26 17:31:58.551558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:34:57.975 passed 00:34:57.975 Test: blockdev nvme passthru rw ...passed 00:34:57.975 Test: blockdev nvme passthru vendor specific ...passed 00:34:57.975 Test: blockdev nvme admin passthru ...[2024-11-26 17:31:58.552430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:34:57.975 [2024-11-26 17:31:58.552469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:34:57.975 passed 00:34:57.975 Test: blockdev copy ...passed 00:34:57.975 Suite: bdevio tests on: Nvme1n1 00:34:57.975 Test: blockdev write read block ...passed 00:34:57.975 Test: blockdev write zeroes read block ...passed 00:34:57.975 Test: blockdev write zeroes read no split ...passed 00:34:57.975 Test: blockdev write zeroes read split ...passed 00:34:57.975 Test: blockdev write zeroes read split partial ...passed 00:34:57.975 Test: blockdev reset ...[2024-11-26 17:31:58.631814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:34:57.975 [2024-11-26 17:31:58.635579] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller spassed 00:34:57.975 Test: blockdev write read 8 blocks ...uccessful. 
00:34:57.975 passed 00:34:57.975 Test: blockdev write read size > 128k ...passed 00:34:57.975 Test: blockdev write read invalid size ...passed 00:34:57.975 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:57.975 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:57.975 Test: blockdev write read max offset ...passed 00:34:57.975 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:57.975 Test: blockdev writev readv 8 blocks ...passed 00:34:57.975 Test: blockdev writev readv 30 x 1block ...passed 00:34:57.975 Test: blockdev writev readv block ...passed 00:34:57.975 Test: blockdev writev readv size > 128k ...passed 00:34:57.975 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:57.975 Test: blockdev comparev and writev ...[2024-11-26 17:31:58.645946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cf234000 len:0x1000 00:34:57.975 [2024-11-26 17:31:58.646121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:34:57.975 passed 00:34:57.975 Test: blockdev nvme passthru rw ...passed 00:34:57.975 Test: blockdev nvme passthru vendor specific ...[2024-11-26 17:31:58.647724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:34:57.975 [2024-11-26 17:31:58.647880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:34:57.975 passed 00:34:57.975 Test: blockdev nvme admin passthru ...passed 00:34:57.975 Test: blockdev copy ...passed 00:34:57.975 Suite: bdevio tests on: Nvme0n1 00:34:57.975 Test: blockdev write read block ...passed 00:34:57.975 Test: blockdev write zeroes read block ...passed 00:34:57.975 Test: blockdev write zeroes read no split ...passed 00:34:58.235 Test: blockdev write zeroes read split ...passed 00:34:58.235 Test: blockdev write zeroes read split partial ...passed 00:34:58.235 Test: blockdev reset ...[2024-11-26 17:31:58.728888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:34:58.235 passed 00:34:58.235 Test: blockdev write read 8 blocks ...[2024-11-26 17:31:58.732873] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:34:58.235 passed 00:34:58.235 Test: blockdev write read size > 128k ...passed 00:34:58.235 Test: blockdev write read invalid size ...passed 00:34:58.235 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:58.235 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:58.235 Test: blockdev write read max offset ...passed 00:34:58.235 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:58.235 Test: blockdev writev readv 8 blocks ...passed 00:34:58.235 Test: blockdev writev readv 30 x 1block ...passed 00:34:58.235 Test: blockdev writev readv block ...passed 00:34:58.235 Test: blockdev writev readv size > 128k ...passed 00:34:58.235 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:58.235 Test: blockdev comparev and writev ...passed 00:34:58.235 Test: blockdev nvme passthru rw ...[2024-11-26 17:31:58.741754] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:34:58.235 separate metadata which is not supported yet. 
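The skip notice just above matches the bdev dump earlier in the log: Nvme0n1 is the only namespace created with separate metadata (md_size 64, md_interleave false), which bdevio's comparev_and_writev path does not support yet. One way to confirm the layout on a live target, assuming rpc.py's -b name filter for bdev_get_bdevs:

```sh
# Inspect the metadata layout that causes the comparev_and_writev skip.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
    | jq '.[0] | {block_size, md_size, md_interleave}'
```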
00:34:58.235 passed 00:34:58.235 Test: blockdev nvme passthru vendor specific ...passed 00:34:58.235 Test: blockdev nvme admin passthru ...[2024-11-26 17:31:58.742383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:34:58.235 [2024-11-26 17:31:58.742437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:34:58.235 passed 00:34:58.235 Test: blockdev copy ...passed 00:34:58.235 00:34:58.235 Run Summary: Type Total Ran Passed Failed Inactive 00:34:58.235 suites 6 6 n/a 0 0 00:34:58.235 tests 138 138 138 0 0 00:34:58.235 asserts 893 893 893 0 n/a 00:34:58.235 00:34:58.235 Elapsed time = 1.515 seconds 00:34:58.235 0 00:34:58.235 17:31:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61117 00:34:58.235 17:31:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61117 ']' 00:34:58.235 17:31:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61117 00:34:58.235 17:31:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:34:58.235 17:31:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:58.235 17:31:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61117 00:34:58.235 killing process with pid 61117 00:34:58.235 17:31:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:58.235 17:31:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:58.235 17:31:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61117' 00:34:58.235 17:31:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61117 00:34:58.235 17:31:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61117 00:34:59.173 ************************************ 00:34:59.173 END TEST bdev_bounds 00:34:59.173 ************************************ 00:34:59.173 17:31:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:34:59.173 00:34:59.173 real 0m2.860s 00:34:59.173 user 0m7.252s 00:34:59.173 sys 0m0.427s 00:34:59.173 17:31:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:59.173 17:31:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:34:59.433 17:31:59 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:34:59.433 17:31:59 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:34:59.433 17:31:59 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:59.433 17:31:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:34:59.433 ************************************ 00:34:59.433 START TEST bdev_nbd 00:34:59.433 ************************************ 00:34:59.433 17:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:34:59.433 17:31:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:34:59.433 17:31:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:34:59.433 17:31:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:34:59.433 17:31:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:59.433 17:31:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:34:59.433 17:31:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:34:59.433 17:31:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:34:59.433 17:31:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:34:59.433 17:31:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:34:59.433 17:31:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:34:59.433 17:31:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:34:59.433 17:31:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:34:59.433 17:31:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:34:59.433 17:31:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:34:59.433 17:31:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:34:59.433 17:31:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61182 00:34:59.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:34:59.433 17:31:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:34:59.433 17:31:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:34:59.433 17:31:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61182 /var/tmp/spdk-nbd.sock 00:34:59.433 17:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61182 ']' 00:34:59.433 17:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:34:59.433 17:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:59.433 17:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:34:59.433 17:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:59.433 17:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:34:59.433 [2024-11-26 17:32:00.034181] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
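The nbd test starting here exports each bdev as a /dev/nbdX block device through the dedicated /var/tmp/spdk-nbd.sock RPC socket, reads a single 4096-byte block with dd, and tears everything down again. A condensed single-device sketch of that round-trip (the modprobe is an assumption, since the harness only checks that /sys/module/nbd exists; the RPC names and dd/stat checks below appear verbatim in the trace):

```sh
# One-device version of the nbd start / verify / stop cycle performed below.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
sudo modprobe nbd                            # assumption: module not loaded yet
sudo $RPC nbd_start_disk Nvme0n1 /dev/nbd0   # export the bdev as /dev/nbd0
grep -q -w nbd0 /proc/partitions             # same readiness check as waitfornbd
sudo dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
test "$(stat -c %s /tmp/nbdtest)" = 4096     # one full block must have been read
sudo $RPC nbd_stop_disk /dev/nbd0
```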
00:34:59.433 [2024-11-26 17:32:00.034313] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:59.693 [2024-11-26 17:32:00.214255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:59.693 [2024-11-26 17:32:00.333896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:00.633 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:00.633 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:35:00.633 17:32:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:35:00.633 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:00.633 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:35:00.633 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:35:00.633 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:35:00.633 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:00.633 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:35:00.633 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:35:00.633 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:35:00.633 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:35:00.633 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:35:00.633 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:35:00.633 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:35:00.633 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:35:00.633 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:35:00.891 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:35:00.891 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:35:00.891 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:00.891 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:00.891 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:00.891 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:35:00.891 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:00.891 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:00.891 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:00.891 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:00.891 1+0 records in 
00:35:00.891 1+0 records out 00:35:00.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000607937 s, 6.7 MB/s 00:35:00.891 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:00.891 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:00.891 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:00.891 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:00.891 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:00.891 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:00.891 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:35:00.891 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:35:00.891 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:35:00.891 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:35:01.149 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:35:01.149 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:35:01.149 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:01.149 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:01.149 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:01.149 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:35:01.149 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:01.149 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:01.149 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:01.149 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:01.149 1+0 records in 00:35:01.149 1+0 records out 00:35:01.149 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000497754 s, 8.2 MB/s 00:35:01.149 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:01.149 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:01.149 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:01.149 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:01.149 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:01.149 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:01.149 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:35:01.149 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:35:01.408 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:35:01.408 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:35:01.408 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:35:01.408 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:35:01.408 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:01.408 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:01.408 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:01.408 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:35:01.408 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:01.408 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:01.408 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:01.408 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:01.408 1+0 records in 00:35:01.408 1+0 records out 00:35:01.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000645558 s, 6.3 MB/s 00:35:01.408 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:01.408 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:01.408 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:01.408 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:01.408 17:32:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:01.408 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:01.408 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:35:01.408 17:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:35:01.666 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:35:01.666 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:35:01.666 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:35:01.666 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:35:01.666 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:01.666 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:01.666 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:01.666 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:35:01.666 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:01.666 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:01.666 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:01.666 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:01.666 1+0 records in 00:35:01.666 1+0 records out 00:35:01.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000502747 s, 8.1 MB/s 00:35:01.666 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:01.666 17:32:02 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:01.666 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:01.666 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:01.666 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:01.666 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:01.666 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:35:01.666 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:35:01.925 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:35:01.925 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:35:01.925 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:35:01.925 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:35:01.925 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:01.925 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:01.925 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:01.925 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:35:01.925 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:01.925 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:01.925 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:01.925 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:01.925 1+0 records in 00:35:01.925 1+0 records out 00:35:01.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102859 s, 4.0 MB/s 00:35:01.925 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:01.925 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:01.925 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:01.925 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:01.925 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:01.925 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:01.925 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:35:01.926 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:35:02.184 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:35:02.184 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:35:02.184 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:35:02.184 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:35:02.184 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:02.184 17:32:02 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:02.184 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:02.184 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:35:02.184 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:02.184 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:02.184 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:02.184 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:02.184 1+0 records in 00:35:02.184 1+0 records out 00:35:02.184 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000880699 s, 4.7 MB/s 00:35:02.184 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:02.184 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:02.184 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:02.184 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:02.184 17:32:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:02.184 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:02.184 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:35:02.184 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:02.441 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:35:02.441 { 00:35:02.441 "nbd_device": "/dev/nbd0", 00:35:02.441 "bdev_name": "Nvme0n1" 00:35:02.441 }, 00:35:02.441 { 00:35:02.441 "nbd_device": "/dev/nbd1", 00:35:02.441 "bdev_name": "Nvme1n1" 00:35:02.441 }, 00:35:02.441 { 00:35:02.441 "nbd_device": "/dev/nbd2", 00:35:02.441 "bdev_name": "Nvme2n1" 00:35:02.441 }, 00:35:02.441 { 00:35:02.441 "nbd_device": "/dev/nbd3", 00:35:02.441 "bdev_name": "Nvme2n2" 00:35:02.441 }, 00:35:02.441 { 00:35:02.441 "nbd_device": "/dev/nbd4", 00:35:02.441 "bdev_name": "Nvme2n3" 00:35:02.441 }, 00:35:02.441 { 00:35:02.441 "nbd_device": "/dev/nbd5", 00:35:02.441 "bdev_name": "Nvme3n1" 00:35:02.441 } 00:35:02.441 ]' 00:35:02.441 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:35:02.441 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:35:02.441 { 00:35:02.442 "nbd_device": "/dev/nbd0", 00:35:02.442 "bdev_name": "Nvme0n1" 00:35:02.442 }, 00:35:02.442 { 00:35:02.442 "nbd_device": "/dev/nbd1", 00:35:02.442 "bdev_name": "Nvme1n1" 00:35:02.442 }, 00:35:02.442 { 00:35:02.442 "nbd_device": "/dev/nbd2", 00:35:02.442 "bdev_name": "Nvme2n1" 00:35:02.442 }, 00:35:02.442 { 00:35:02.442 "nbd_device": "/dev/nbd3", 00:35:02.442 "bdev_name": "Nvme2n2" 00:35:02.442 }, 00:35:02.442 { 00:35:02.442 "nbd_device": "/dev/nbd4", 00:35:02.442 "bdev_name": "Nvme2n3" 00:35:02.442 }, 00:35:02.442 { 00:35:02.442 "nbd_device": "/dev/nbd5", 00:35:02.442 "bdev_name": "Nvme3n1" 00:35:02.442 } 00:35:02.442 ]' 00:35:02.442 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:35:02.442 17:32:02 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:35:02.442 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:02.442 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:35:02.442 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:02.442 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:35:02.442 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:02.442 17:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:02.699 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:02.699 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:02.699 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:02.699 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:02.699 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:02.699 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:02.699 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:02.699 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:02.699 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:02.699 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:35:02.958 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:02.958 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:02.958 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:02.958 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:02.958 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:02.958 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:02.958 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:02.958 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:02.958 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:02.958 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:35:02.958 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:35:02.958 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:35:02.958 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:35:02.958 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:02.958 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:02.958 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:35:02.958 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:02.958 17:32:03 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:35:02.958 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:02.958 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:35:03.217 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:35:03.217 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:35:03.217 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:35:03.217 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:03.217 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:03.217 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:35:03.217 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:03.217 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:03.217 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:03.217 17:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:35:03.477 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:35:03.477 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:35:03.477 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:35:03.477 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:03.477 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:03.477 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:35:03.477 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:03.477 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:03.477 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:03.477 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:35:03.737 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:35:03.737 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:35:03.737 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:35:03.737 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:03.737 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:03.737 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:35:03.737 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:03.737 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:03.737 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:03.737 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:03.737 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:35:03.996 17:32:04 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:35:03.996 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:35:04.254 /dev/nbd0 00:35:04.254 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:04.254 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:04.254 17:32:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:35:04.254 17:32:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:04.254 17:32:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:04.254 
17:32:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:04.254 17:32:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:35:04.254 17:32:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:04.254 17:32:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:04.254 17:32:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:04.254 17:32:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:04.254 1+0 records in 00:35:04.254 1+0 records out 00:35:04.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600797 s, 6.8 MB/s 00:35:04.254 17:32:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:04.254 17:32:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:04.254 17:32:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:04.254 17:32:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:04.254 17:32:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:04.254 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:04.254 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:35:04.254 17:32:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:35:04.514 /dev/nbd1 00:35:04.514 17:32:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:35:04.514 17:32:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:35:04.514 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:35:04.514 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:04.514 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:04.514 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:04.514 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:35:04.514 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:04.514 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:04.514 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:04.514 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:04.514 1+0 records in 00:35:04.514 1+0 records out 00:35:04.514 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000671018 s, 6.1 MB/s 00:35:04.514 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:04.514 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:04.514 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:04.514 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:04.514 17:32:05 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:35:04.514 17:32:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:04.514 17:32:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:35:04.514 17:32:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:35:04.773 /dev/nbd10 00:35:04.773 17:32:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:35:04.773 17:32:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:35:04.773 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:35:04.773 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:04.773 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:04.773 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:04.773 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:35:04.773 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:04.773 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:04.773 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:04.773 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:04.773 1+0 records in 00:35:04.773 1+0 records out 00:35:04.773 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00067069 s, 6.1 MB/s 00:35:04.773 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:04.773 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:04.773 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:04.773 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:04.773 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:04.773 17:32:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:04.773 17:32:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:35:04.773 17:32:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:35:05.032 /dev/nbd11 00:35:05.032 17:32:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:35:05.032 17:32:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:35:05.032 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:35:05.032 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:05.032 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:05.032 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:05.032 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:35:05.032 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:05.032 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:05.032 17:32:05 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:05.032 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:05.032 1+0 records in 00:35:05.032 1+0 records out 00:35:05.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000786921 s, 5.2 MB/s 00:35:05.032 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:05.032 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:05.032 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:05.032 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:05.032 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:05.032 17:32:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:05.032 17:32:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:35:05.032 17:32:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:35:05.291 /dev/nbd12 00:35:05.291 17:32:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:35:05.291 17:32:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:35:05.291 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:35:05.291 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:05.291 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:05.291 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:05.291 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:35:05.291 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:05.291 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:05.291 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:05.291 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:05.291 1+0 records in 00:35:05.291 1+0 records out 00:35:05.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000755649 s, 5.4 MB/s 00:35:05.291 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:05.291 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:05.291 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:05.291 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:05.291 17:32:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:05.291 17:32:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:05.291 17:32:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:35:05.291 17:32:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:35:05.550 /dev/nbd13 
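Each nbd_start_disk RPC above returns the kernel device node, and the harness immediately validates it with waitfornbd (the nbd13 check continues just below). Condensed into a standalone helper, the logic looks roughly like this; the retry sleep is an assumption (the trace never needs a retry), and the scratch-file path is shortened from the repo's test/bdev/nbdtest:

# Condensed sketch of waitfornbd, reconstructed from the xtrace above.
# The 'sleep 0.1' between retries is assumed; the loop bounds, the grep
# on /proc/partitions, and the dd/stat checks mirror the trace.
waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # A single 4 KiB O_DIRECT read proves the device actually services I/O.
    for ((i = 1; i <= 20; i++)); do
        if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        fi
        sleep 0.1
    done
    return 1
}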
00:35:05.550 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:35:05.550 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:35:05.550 17:32:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:35:05.550 17:32:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:05.550 17:32:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:05.550 17:32:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:05.550 17:32:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:35:05.550 17:32:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:05.550 17:32:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:05.550 17:32:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:05.550 17:32:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:05.550 1+0 records in 00:35:05.550 1+0 records out 00:35:05.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000855577 s, 4.8 MB/s 00:35:05.550 17:32:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:05.550 17:32:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:05.550 17:32:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:05.550 17:32:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:05.550 17:32:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:05.550 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:05.550 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:35:05.550 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:05.550 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:05.550 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:05.810 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:35:05.810 { 00:35:05.810 "nbd_device": "/dev/nbd0", 00:35:05.810 "bdev_name": "Nvme0n1" 00:35:05.810 }, 00:35:05.810 { 00:35:05.810 "nbd_device": "/dev/nbd1", 00:35:05.810 "bdev_name": "Nvme1n1" 00:35:05.810 }, 00:35:05.810 { 00:35:05.810 "nbd_device": "/dev/nbd10", 00:35:05.810 "bdev_name": "Nvme2n1" 00:35:05.810 }, 00:35:05.810 { 00:35:05.810 "nbd_device": "/dev/nbd11", 00:35:05.810 "bdev_name": "Nvme2n2" 00:35:05.810 }, 00:35:05.810 { 00:35:05.810 "nbd_device": "/dev/nbd12", 00:35:05.810 "bdev_name": "Nvme2n3" 00:35:05.810 }, 00:35:05.810 { 00:35:05.810 "nbd_device": "/dev/nbd13", 00:35:05.810 "bdev_name": "Nvme3n1" 00:35:05.810 } 00:35:05.810 ]' 00:35:05.810 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:35:05.810 { 00:35:05.810 "nbd_device": "/dev/nbd0", 00:35:05.810 "bdev_name": "Nvme0n1" 00:35:05.810 }, 00:35:05.810 { 00:35:05.810 "nbd_device": "/dev/nbd1", 00:35:05.810 "bdev_name": "Nvme1n1" 00:35:05.810 }, 00:35:05.810 { 00:35:05.810 "nbd_device": "/dev/nbd10", 00:35:05.810 "bdev_name": "Nvme2n1" 
00:35:05.810 }, 00:35:05.810 { 00:35:05.810 "nbd_device": "/dev/nbd11", 00:35:05.810 "bdev_name": "Nvme2n2" 00:35:05.810 }, 00:35:05.810 { 00:35:05.810 "nbd_device": "/dev/nbd12", 00:35:05.810 "bdev_name": "Nvme2n3" 00:35:05.810 }, 00:35:05.810 { 00:35:05.810 "nbd_device": "/dev/nbd13", 00:35:05.810 "bdev_name": "Nvme3n1" 00:35:05.810 } 00:35:05.810 ]' 00:35:05.810 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:05.810 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:35:05.810 /dev/nbd1 00:35:05.810 /dev/nbd10 00:35:05.810 /dev/nbd11 00:35:05.810 /dev/nbd12 00:35:05.810 /dev/nbd13' 00:35:05.810 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:35:05.810 /dev/nbd1 00:35:05.810 /dev/nbd10 00:35:05.810 /dev/nbd11 00:35:05.810 /dev/nbd12 00:35:05.810 /dev/nbd13' 00:35:05.810 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:05.810 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:35:05.810 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:35:05.810 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:35:05.810 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:35:05.810 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:35:05.810 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:35:05.810 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:35:05.810 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:35:05.810 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:35:05.811 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:35:05.811 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:35:05.811 256+0 records in 00:35:05.811 256+0 records out 00:35:05.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116035 s, 90.4 MB/s 00:35:05.811 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:05.811 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:35:06.070 256+0 records in 00:35:06.070 256+0 records out 00:35:06.070 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12615 s, 8.3 MB/s 00:35:06.070 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:06.070 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:35:06.070 256+0 records in 00:35:06.070 256+0 records out 00:35:06.070 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124909 s, 8.4 MB/s 00:35:06.070 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:06.070 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:35:06.328 256+0 records in 00:35:06.328 256+0 records out 00:35:06.328 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125582 s, 8.3 MB/s 00:35:06.328 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:06.328 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:35:06.328 256+0 records in 00:35:06.328 256+0 records out 00:35:06.328 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129305 s, 8.1 MB/s 00:35:06.328 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:06.328 17:32:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:35:06.586 256+0 records in 00:35:06.587 256+0 records out 00:35:06.587 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12214 s, 8.6 MB/s 00:35:06.587 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:06.587 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:35:06.587 256+0 records in 00:35:06.587 256+0 records out 00:35:06.587 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130542 s, 8.0 MB/s 00:35:06.587 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:35:06.587 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:35:06.587 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:35:06.587 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:35:06.587 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:35:06.587 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:35:06.587 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:35:06.587 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:06.587 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:35:06.587 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:06.587 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:35:06.587 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:06.587 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:35:06.587 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:06.587 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:35:06.587 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:06.587 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:35:06.587 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:35:06.587 17:32:07 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:35:06.844 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:35:06.845 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:35:06.845 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:06.845 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:35:06.845 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:06.845 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:35:06.845 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:06.845 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:06.845 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:06.845 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:06.845 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:06.845 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:06.845 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:06.845 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:06.845 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:06.845 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:06.845 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:06.845 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:35:07.102 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:07.102 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:07.102 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:07.102 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:07.102 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:07.102 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:07.102 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:07.102 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:07.102 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:07.102 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:35:07.359 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:35:07.359 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:35:07.359 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:35:07.359 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:07.359 17:32:07 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:07.359 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:35:07.359 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:07.359 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:07.359 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:07.359 17:32:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:35:07.619 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:35:07.619 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:35:07.619 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:35:07.619 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:07.619 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:07.619 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:35:07.619 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:07.619 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:07.619 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:07.619 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:35:07.964 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:35:07.964 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:35:07.964 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:35:07.964 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:07.964 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:07.964 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:35:07.964 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:07.964 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:07.964 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:07.964 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:35:07.964 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:35:07.964 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:35:07.964 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:35:07.964 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:07.964 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:07.964 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:35:07.964 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:07.964 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:07.964 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:07.964 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:35:07.964 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:08.223 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:35:08.223 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:08.223 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:35:08.223 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:35:08.223 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:08.223 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:35:08.223 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:35:08.223 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:35:08.223 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:35:08.223 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:35:08.223 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:35:08.223 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:35:08.223 17:32:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:35:08.223 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:08.223 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:35:08.223 17:32:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:35:08.481 malloc_lvol_verify 00:35:08.481 17:32:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:35:08.740 82ff804a-fa56-4ccc-b138-c008eea6c21f 00:35:08.740 17:32:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:35:08.998 28302c52-7f8f-4bd6-82f2-4a432b702aa6 00:35:08.998 17:32:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:35:09.257 /dev/nbd0 00:35:09.257 17:32:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:35:09.257 17:32:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:35:09.257 17:32:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:35:09.257 17:32:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:35:09.257 17:32:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:35:09.257 mke2fs 1.47.0 (5-Feb-2023) 00:35:09.257 Discarding device blocks: 0/4096 done 00:35:09.257 Creating filesystem with 4096 1k blocks and 1024 inodes 00:35:09.257 00:35:09.257 Allocating group tables: 0/1 done 00:35:09.257 Writing inode tables: 0/1 done 00:35:09.257 Creating journal (1024 blocks): done 00:35:09.257 Writing superblocks and filesystem accounting information: 0/1 done 00:35:09.257 00:35:09.257 17:32:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:35:09.257 17:32:09 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:09.257 17:32:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:09.257 17:32:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:09.257 17:32:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:35:09.257 17:32:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:09.257 17:32:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:09.518 17:32:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:09.518 17:32:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:09.518 17:32:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:09.518 17:32:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:09.518 17:32:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:09.518 17:32:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:09.518 17:32:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:09.518 17:32:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:09.518 17:32:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61182 00:35:09.518 17:32:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61182 ']' 00:35:09.518 17:32:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61182 00:35:09.518 17:32:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:35:09.518 17:32:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:09.518 17:32:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61182 00:35:09.518 killing process with pid 61182 00:35:09.518 17:32:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:09.518 17:32:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:09.518 17:32:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61182' 00:35:09.518 17:32:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61182 00:35:09.518 17:32:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61182 00:35:10.896 ************************************ 00:35:10.896 END TEST bdev_nbd 00:35:10.896 ************************************ 00:35:10.896 17:32:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:35:10.896 00:35:10.896 real 0m11.303s 00:35:10.896 user 0m14.672s 00:35:10.896 sys 0m4.659s 00:35:10.896 17:32:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:10.896 17:32:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:35:10.896 17:32:11 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:35:10.896 17:32:11 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:35:10.896 skipping fio tests on NVMe due to multi-ns failures. 00:35:10.896 17:32:11 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
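With the NBD coverage done, the suite moves on to bdev_verify, which drives all six NVMe bdevs through SPDK's bdevperf example: a 128-deep queue of 4 KiB verified I/O for 5 seconds on two reactors (-m 0x3; judging from the paired Core Mask 0x1/0x2 rows in the table below, -C fans each bdev out to every core). Run by hand it is roughly the command below; bdev.json describing the bdevs was generated earlier in the job and is assumed to still be in place:

# Roughly what run_test bdev_verify executes (see the trace that follows).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3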
00:35:10.896 17:32:11 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:10.896 17:32:11 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:35:10.896 17:32:11 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:35:10.896 17:32:11 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:10.896 17:32:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:35:10.896 ************************************ 00:35:10.896 START TEST bdev_verify 00:35:10.896 ************************************ 00:35:10.896 17:32:11 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:35:10.896 [2024-11-26 17:32:11.401543] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:35:10.896 [2024-11-26 17:32:11.401671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61569 ] 00:35:10.896 [2024-11-26 17:32:11.586205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:11.156 [2024-11-26 17:32:11.706307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:11.156 [2024-11-26 17:32:11.706352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:11.724 Running I/O for 5 seconds... 00:35:14.037 19776.00 IOPS, 77.25 MiB/s [2024-11-26T17:32:15.667Z] 20192.00 IOPS, 78.88 MiB/s [2024-11-26T17:32:16.624Z] 20586.67 IOPS, 80.42 MiB/s [2024-11-26T17:32:17.560Z] 20592.00 IOPS, 80.44 MiB/s [2024-11-26T17:32:17.560Z] 20454.40 IOPS, 79.90 MiB/s 00:35:16.866 Latency(us) 00:35:16.866 [2024-11-26T17:32:17.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:16.866 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:16.866 Verification LBA range: start 0x0 length 0xbd0bd 00:35:16.866 Nvme0n1 : 5.05 1697.91 6.63 0.00 0.00 75234.56 15686.53 67799.49 00:35:16.866 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:35:16.866 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:35:16.866 Nvme0n1 : 5.06 1694.35 6.62 0.00 0.00 74851.99 15791.81 68641.72 00:35:16.866 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:16.866 Verification LBA range: start 0x0 length 0xa0000 00:35:16.866 Nvme1n1 : 5.05 1697.46 6.63 0.00 0.00 75174.66 15897.09 63167.23 00:35:16.866 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:35:16.866 Verification LBA range: start 0xa0000 length 0xa0000 00:35:16.866 Nvme1n1 : 5.06 1693.66 6.62 0.00 0.00 74777.44 11685.94 66115.03 00:35:16.866 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:16.866 Verification LBA range: start 0x0 length 0x80000 00:35:16.866 Nvme2n1 : 5.05 1696.57 6.63 0.00 0.00 75063.46 16107.64 60219.42 00:35:16.866 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:35:16.866 Verification LBA range: start 0x80000 length 0x80000 00:35:16.866 Nvme2n1 : 5.07 1704.06 6.66 0.00 0.00 74311.26 2645.13 68220.61 00:35:16.866 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:16.866 Verification LBA range: start 0x0 length 0x80000 00:35:16.866 Nvme2n2 : 5.06 1695.77 6.62 0.00 0.00 74994.38 16844.59 59798.31 00:35:16.866 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:35:16.866 Verification LBA range: start 0x80000 length 0x80000 00:35:16.866 Nvme2n2 : 5.05 1696.65 6.63 0.00 0.00 75265.37 17792.10 67378.38 00:35:16.866 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:16.866 Verification LBA range: start 0x0 length 0x80000 00:35:16.866 Nvme2n3 : 5.06 1695.01 6.62 0.00 0.00 74900.47 17160.43 62746.11 00:35:16.866 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:35:16.866 Verification LBA range: start 0x80000 length 0x80000 00:35:16.866 Nvme2n3 : 5.06 1695.85 6.62 0.00 0.00 75069.60 19160.73 66536.15 00:35:16.866 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:16.866 Verification LBA range: start 0x0 length 0x20000 00:35:16.866 Nvme3n1 : 5.06 1694.26 6.62 0.00 0.00 74808.81 15370.69 64430.57 00:35:16.866 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:35:16.866 Verification LBA range: start 0x20000 length 0x20000 00:35:16.866 Nvme3n1 : 5.06 1695.10 6.62 0.00 0.00 74951.76 19687.12 68641.72 00:35:16.866 [2024-11-26T17:32:17.560Z] =================================================================================================================== 00:35:16.866 [2024-11-26T17:32:17.560Z] Total : 20356.65 79.52 0.00 0.00 74949.92 2645.13 68641.72 00:35:18.769 00:35:18.769 real 0m7.636s 00:35:18.769 user 0m14.121s 00:35:18.769 sys 0m0.294s 00:35:18.769 17:32:18 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:18.769 17:32:18 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:35:18.769 ************************************ 00:35:18.769 END TEST bdev_verify 00:35:18.769 ************************************ 00:35:18.769 17:32:18 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:35:18.769 17:32:18 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:35:18.769 17:32:18 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:18.769 17:32:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:35:18.769 ************************************ 00:35:18.769 START TEST bdev_verify_big_io 00:35:18.769 ************************************ 00:35:18.769 17:32:19 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:35:18.769 [2024-11-26 17:32:19.111649] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
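bdev_verify_big_io, starting here, repeats the identical setup with 64 KiB I/O instead of 4 KiB; the only flag that changes is -o. That is why, in the table that follows, per-device IOPS falls from roughly 1,700 to under 200 while per-device throughput actually rises (for example Nvme0n1: 6.63 MiB/s at 4 KiB vs 10.47 MiB/s at 64 KiB):

# Same bdevperf invocation as bdev_verify, with 64 KiB I/O.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3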
00:35:18.769 [2024-11-26 17:32:19.112489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61673 ] 00:35:18.769 [2024-11-26 17:32:19.307129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:18.769 [2024-11-26 17:32:19.426167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:18.769 [2024-11-26 17:32:19.426205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:19.706 Running I/O for 5 seconds... 00:35:24.396 2107.00 IOPS, 131.69 MiB/s [2024-11-26T17:32:26.464Z] 3074.00 IOPS, 192.12 MiB/s [2024-11-26T17:32:26.464Z] 3860.67 IOPS, 241.29 MiB/s 00:35:25.770 Latency(us) 00:35:25.770 [2024-11-26T17:32:26.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:25.770 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:35:25.770 Verification LBA range: start 0x0 length 0xbd0b 00:35:25.770 Nvme0n1 : 5.35 167.56 10.47 0.00 0.00 739084.90 30530.83 842229.72 00:35:25.770 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:35:25.770 Verification LBA range: start 0xbd0b length 0xbd0b 00:35:25.770 Nvme0n1 : 5.51 162.60 10.16 0.00 0.00 765465.91 21371.58 828754.04 00:35:25.770 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:35:25.770 Verification LBA range: start 0x0 length 0xa000 00:35:25.770 Nvme1n1 : 5.56 172.67 10.79 0.00 0.00 700557.14 53060.47 700735.13 00:35:25.770 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:35:25.770 Verification LBA range: start 0xa000 length 0xa000 00:35:25.770 Nvme1n1 : 5.51 162.53 10.16 0.00 0.00 746006.15 82117.40 710841.88 00:35:25.770 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:35:25.770 Verification LBA range: start 0x0 length 0x8000 00:35:25.770 Nvme2n1 : 5.68 176.47 11.03 0.00 0.00 668221.01 45901.52 646832.42 00:35:25.770 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:35:25.770 Verification LBA range: start 0x8000 length 0x8000 00:35:25.770 Nvme2n1 : 5.61 163.73 10.23 0.00 0.00 719083.64 97698.65 680521.61 00:35:25.770 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:35:25.770 Verification LBA range: start 0x0 length 0x8000 00:35:25.770 Nvme2n2 : 5.69 180.00 11.25 0.00 0.00 642222.37 78327.36 660308.10 00:35:25.770 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:35:25.770 Verification LBA range: start 0x8000 length 0x8000 00:35:25.770 Nvme2n2 : 5.74 175.35 10.96 0.00 0.00 660863.59 44848.73 744531.07 00:35:25.770 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:35:25.770 Verification LBA range: start 0x0 length 0x8000 00:35:25.770 Nvme2n3 : 5.77 188.46 11.78 0.00 0.00 600419.65 19055.45 677152.69 00:35:25.770 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:35:25.770 Verification LBA range: start 0x8000 length 0x8000 00:35:25.770 Nvme2n3 : 5.74 174.79 10.92 0.00 0.00 644170.86 45690.96 690628.37 00:35:25.770 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:35:25.770 Verification LBA range: start 0x0 length 0x2000 00:35:25.770 Nvme3n1 : 5.79 199.05 12.44 0.00 0.00 558434.32 1618.66 693997.29 00:35:25.770 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 
128, IO size: 65536) 00:35:25.770 Verification LBA range: start 0x2000 length 0x2000 00:35:25.770 Nvme3n1 : 5.76 187.61 11.73 0.00 0.00 589684.43 4526.98 1495799.98 00:35:25.770 [2024-11-26T17:32:26.464Z] =================================================================================================================== 00:35:25.770 [2024-11-26T17:32:26.464Z] Total : 2110.80 131.93 0.00 0.00 664431.99 1618.66 1495799.98 00:35:27.670 00:35:27.670 real 0m8.963s 00:35:27.670 user 0m16.661s 00:35:27.670 sys 0m0.368s 00:35:27.670 17:32:27 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:27.670 17:32:27 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:35:27.670 ************************************ 00:35:27.670 END TEST bdev_verify_big_io 00:35:27.670 ************************************ 00:35:27.670 17:32:28 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:27.670 17:32:28 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:35:27.670 17:32:28 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:27.670 17:32:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:35:27.670 ************************************ 00:35:27.670 START TEST bdev_write_zeroes 00:35:27.670 ************************************ 00:35:27.670 17:32:28 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:27.670 [2024-11-26 17:32:28.135125] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:35:27.670 [2024-11-26 17:32:28.135253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61787 ] 00:35:27.670 [2024-11-26 17:32:28.320164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.928 [2024-11-26 17:32:28.436656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:28.493 Running I/O for 1 seconds... 
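The one-second pass now running issues Write Zeroes commands instead of verified reads and writes, on a single reactor this time (the EAL banner above shows -c 0x1). The equivalent standalone invocation, using the same generated bdev.json:

# Roughly what run_test bdev_write_zeroes executes.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1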
00:35:29.864 65664.00 IOPS, 256.50 MiB/s
00:35:29.864
00:35:29.864 Latency(us)
00:35:29.864 [2024-11-26T17:32:30.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:29.864 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:35:29.864 Nvme0n1 : 1.02 10923.96 42.67 0.00 0.00 11686.16 9106.61 26214.40
00:35:29.864 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:35:29.864 Nvme1n1 : 1.02 10912.90 42.63 0.00 0.00 11684.53 9053.97 26530.24
00:35:29.864 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:35:29.864 Nvme2n1 : 1.02 10902.31 42.59 0.00 0.00 11656.89 9264.53 26109.12
00:35:29.864 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:35:29.864 Nvme2n2 : 1.02 10943.93 42.75 0.00 0.00 11567.70 5869.29 22529.64
00:35:29.864 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:35:29.864 Nvme2n3 : 1.02 10934.12 42.71 0.00 0.00 11547.31 6079.85 22319.09
00:35:29.864 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:35:29.864 Nvme3n1 : 1.03 10924.34 42.67 0.00 0.00 11528.47 5948.25 22003.25
00:35:29.864 [2024-11-26T17:32:30.558Z] ===================================================================================================================
00:35:29.864 [2024-11-26T17:32:30.558Z] Total : 65541.55 256.02 0.00 0.00 11611.66 5869.29 26530.24
00:35:30.800
00:35:30.800 real 0m3.334s
00:35:30.800 user 0m2.936s
00:35:30.800 sys 0m0.280s
00:35:30.800 17:32:31 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:30.800 17:32:31 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:35:30.800 ************************************
00:35:30.800 END TEST bdev_write_zeroes
00:35:30.800 ************************************
00:35:30.800 17:32:31 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:35:30.800 17:32:31 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:35:30.800 17:32:31 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:30.800 17:32:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:35:30.800 ************************************
00:35:30.800 START TEST bdev_json_nonenclosed
00:35:30.800 ************************************
00:35:30.800 17:32:31 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:35:31.067 [2024-11-26 17:32:31.541001] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization...
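bdev_json_nonenclosed, starting here, is a negative test: it points bdevperf at a config whose top level is not a JSON object and expects startup to fail (the "not enclosed in {}" error follows below). The actual nonenclosed.json is not reproduced in this log; a file shaped like the following illustrative one would trip the same check:

# Illustrative only; the real nonenclosed.json may differ in content,
# but any top-level value other than an object is rejected.
cat > /tmp/nonenclosed-example.json <<'EOF'
[ { "subsystems": [] } ]
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/nonenclosed-example.json -q 128 -o 4096 -w write_zeroes -t 1
# expected: json_config_prepare_ctx: *ERROR*: Invalid JSON configuration:
# not enclosed in {}. followed by a non-zero exit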
00:35:31.067 [2024-11-26 17:32:31.541142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61846 ] 00:35:31.067 [2024-11-26 17:32:31.720237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:31.325 [2024-11-26 17:32:31.834266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:31.325 [2024-11-26 17:32:31.834388] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:35:31.325 [2024-11-26 17:32:31.834410] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:35:31.325 [2024-11-26 17:32:31.834422] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:31.584 00:35:31.584 real 0m0.646s 00:35:31.584 user 0m0.397s 00:35:31.584 sys 0m0.143s 00:35:31.584 17:32:32 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:31.584 ************************************ 00:35:31.584 END TEST bdev_json_nonenclosed 00:35:31.584 ************************************ 00:35:31.584 17:32:32 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:35:31.584 17:32:32 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:31.584 17:32:32 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:35:31.584 17:32:32 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:31.584 17:32:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:35:31.584 ************************************ 00:35:31.584 START TEST bdev_json_nonarray 00:35:31.584 ************************************ 00:35:31.584 17:32:32 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:35:31.584 [2024-11-26 17:32:32.250404] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:35:31.584 [2024-11-26 17:32:32.250531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61870 ] 00:35:31.843 [2024-11-26 17:32:32.431672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.102 [2024-11-26 17:32:32.545049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:32.102 [2024-11-26 17:32:32.545362] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
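The "'subsystems' should be an array" error just above is the point of the companion test, bdev_json_nonarray: here the config is a well-formed JSON object, but its "subsystems" member is not an array. The real nonarray.json is likewise not shown in this log; an illustrative equivalent:

# Illustrative only; the real nonarray.json is not reproduced here.
cat > /tmp/nonarray-example.json <<'EOF'
{ "subsystems": { "bdev": {} } }
EOF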
00:35:32.102 [2024-11-26 17:32:32.545387] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:35:32.102 [2024-11-26 17:32:32.545399] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:32.365 00:35:32.365 real 0m0.637s 00:35:32.365 user 0m0.393s 00:35:32.365 sys 0m0.140s 00:35:32.365 17:32:32 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:32.365 ************************************ 00:35:32.365 END TEST bdev_json_nonarray 00:35:32.365 17:32:32 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:35:32.365 ************************************ 00:35:32.365 17:32:32 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:35:32.365 17:32:32 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:35:32.365 17:32:32 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:35:32.365 17:32:32 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:35:32.365 17:32:32 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:35:32.365 17:32:32 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:35:32.365 17:32:32 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:35:32.365 17:32:32 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:35:32.365 17:32:32 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:35:32.365 17:32:32 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:35:32.365 17:32:32 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:35:32.365 00:35:32.365 real 0m42.797s 00:35:32.365 user 1m3.024s 00:35:32.365 sys 0m7.749s 00:35:32.365 17:32:32 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:32.365 17:32:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:35:32.365 ************************************ 00:35:32.365 END TEST blockdev_nvme 00:35:32.365 ************************************ 00:35:32.365 17:32:32 -- spdk/autotest.sh@209 -- # uname -s 00:35:32.365 17:32:32 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:35:32.365 17:32:32 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:35:32.365 17:32:32 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:32.365 17:32:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:32.365 17:32:32 -- common/autotest_common.sh@10 -- # set +x 00:35:32.365 ************************************ 00:35:32.365 START TEST blockdev_nvme_gpt 00:35:32.365 ************************************ 00:35:32.365 17:32:32 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:35:32.625 * Looking for test storage... 
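The gpt argument passed to blockdev.sh here selects a different setup path than the plain nvme run that just finished; a simplified sketch of the dispatch (the real test_type case statement is traced verbatim further down in this log):

    test_type=$1                  # "gpt" for this run
    case "$test_type" in
      nvme) setup_nvme_conf ;;
      gpt)  setup_gpt_conf ;;     # partitions a blank disk, then tests the halves
      *)    echo "unsupported test type: $test_type" >&2; exit 1 ;;
    esac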
00:35:32.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:35:32.625 17:32:33 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:32.625 17:32:33 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:35:32.625 17:32:33 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:32.625 17:32:33 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:32.625 17:32:33 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:35:32.625 17:32:33 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:32.625 17:32:33 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:32.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.625 --rc genhtml_branch_coverage=1 00:35:32.625 --rc genhtml_function_coverage=1 00:35:32.625 --rc genhtml_legend=1 00:35:32.625 --rc geninfo_all_blocks=1 00:35:32.625 --rc geninfo_unexecuted_blocks=1 00:35:32.625 00:35:32.625 ' 00:35:32.625 17:32:33 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:32.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.625 --rc 
genhtml_branch_coverage=1 00:35:32.625 --rc genhtml_function_coverage=1 00:35:32.625 --rc genhtml_legend=1 00:35:32.625 --rc geninfo_all_blocks=1 00:35:32.625 --rc geninfo_unexecuted_blocks=1 00:35:32.625 00:35:32.625 ' 00:35:32.625 17:32:33 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:32.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.625 --rc genhtml_branch_coverage=1 00:35:32.625 --rc genhtml_function_coverage=1 00:35:32.625 --rc genhtml_legend=1 00:35:32.625 --rc geninfo_all_blocks=1 00:35:32.625 --rc geninfo_unexecuted_blocks=1 00:35:32.625 00:35:32.625 ' 00:35:32.625 17:32:33 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:32.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.625 --rc genhtml_branch_coverage=1 00:35:32.625 --rc genhtml_function_coverage=1 00:35:32.625 --rc genhtml_legend=1 00:35:32.625 --rc geninfo_all_blocks=1 00:35:32.625 --rc geninfo_unexecuted_blocks=1 00:35:32.625 00:35:32.625 ' 00:35:32.625 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:35:32.625 17:32:33 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:35:32.625 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:35:32.625 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:35:32.625 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:35:32.625 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:35:32.625 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:35:32.625 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:35:32.625 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:35:32.625 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:35:32.625 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:35:32.626 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:35:32.626 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:35:32.626 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:35:32.626 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:35:32.626 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:35:32.626 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:35:32.626 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:35:32.626 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:35:32.626 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:35:32.626 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:35:32.626 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:35:32.626 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:35:32.626 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:35:32.626 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61950 00:35:32.626 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:35:32.626 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; 
exit 1' SIGINT SIGTERM EXIT 00:35:32.626 17:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61950 00:35:32.626 17:32:33 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 61950 ']' 00:35:32.626 17:32:33 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:32.626 17:32:33 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:32.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:32.626 17:32:33 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:32.626 17:32:33 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:32.626 17:32:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:35:32.626 [2024-11-26 17:32:33.302073] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:35:32.626 [2024-11-26 17:32:33.302196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61950 ] 00:35:32.885 [2024-11-26 17:32:33.471170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:33.144 [2024-11-26 17:32:33.593733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:34.078 17:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:34.079 17:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:35:34.079 17:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:35:34.079 17:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:35:34.079 17:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:34.338 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:34.907 Waiting for block devices as requested 00:35:34.907 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:34.907 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:34.907 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:35:35.166 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:35:40.473 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:35:40.473 17:32:40 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:35:40.473 17:32:40 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:35:40.473 17:32:40 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:40.473 17:32:40 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:35:40.473 17:32:40 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:35:40.473 17:32:40 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:35:40.473 17:32:40 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:35:40.473 17:32:40 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:35:40.473 17:32:40 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:35:40.473 17:32:40 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:35:40.473 17:32:40 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:35:40.473 BYT; 00:35:40.473 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:35:40.473 17:32:40 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:35:40.473 BYT; 00:35:40.473 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:35:40.473 17:32:40 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:35:40.473 17:32:40 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:35:40.473 17:32:40 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:35:40.473 17:32:40 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:35:40.473 17:32:40 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:35:40.473 17:32:40 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:35:40.473 17:32:40 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:35:40.473 17:32:40 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:35:40.473 17:32:40 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:35:40.473 17:32:40 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:35:40.473 17:32:40 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:35:40.473 17:32:40 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:35:40.473 17:32:40 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:35:40.473 17:32:40 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:35:40.473 17:32:40 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:35:40.473 17:32:40 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:35:40.473 17:32:40 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:35:40.473 17:32:40 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:35:40.473 17:32:40 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:35:40.473 17:32:40 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:35:40.473 17:32:40 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:35:40.473 17:32:40 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:35:40.473 17:32:40 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:35:40.473 17:32:40 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:35:40.473 17:32:40 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:35:40.473 17:32:40 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:35:40.473 17:32:40 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:35:40.473 17:32:40 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:35:40.473 17:32:40 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:35:41.409 The operation has completed successfully. 00:35:41.409 17:32:41 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:35:42.344 The operation has completed successfully. 00:35:42.344 17:32:42 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:35:43.279 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:43.846 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:35:43.846 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:35:43.846 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:35:44.104 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:35:44.104 17:32:44 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:35:44.104 17:32:44 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.104 17:32:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:35:44.104 [] 00:35:44.104 17:32:44 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.104 17:32:44 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:35:44.104 17:32:44 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:35:44.104 17:32:44 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:35:44.104 17:32:44 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:35:44.104 17:32:44 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:35:44.104 17:32:44 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.104 17:32:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:35:44.363 17:32:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.622 17:32:45 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:35:44.622 17:32:45 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.622 17:32:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:35:44.622 17:32:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.622 17:32:45 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:35:44.622 17:32:45 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:35:44.622 17:32:45 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.622 17:32:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:35:44.622 17:32:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.622 17:32:45 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:35:44.622 17:32:45 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.622 17:32:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:35:44.622 17:32:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.622 17:32:45 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:35:44.622 17:32:45 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.622 17:32:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:35:44.622 17:32:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.622 17:32:45 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:35:44.622 17:32:45 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:35:44.622 17:32:45 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:35:44.622 17:32:45 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.622 17:32:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:35:44.622 17:32:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.622 17:32:45 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:35:44.622 17:32:45 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:35:44.623 17:32:45 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "8861e264-2a7c-43e2-86b4-93030e0d22ce"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8861e264-2a7c-43e2-86b4-93030e0d22ce",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "c25e3431-c0ca-420a-acf8-bb0252779427"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c25e3431-c0ca-420a-acf8-bb0252779427",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "33f76e77-18e4-470c-ab9f-7a607d7ee884"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "33f76e77-18e4-470c-ab9f-7a607d7ee884",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "74c5f145-33b1-4ffd-b236-004328b611ff"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "74c5f145-33b1-4ffd-b236-004328b611ff",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "9dae5825-8c6f-4442-9643-2a2a8fe55b10"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9dae5825-8c6f-4442-9643-2a2a8fe55b10",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:35:44.623 17:32:45 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:35:44.623 17:32:45 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:35:44.880 17:32:45 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:35:44.880 17:32:45 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 61950 00:35:44.880 17:32:45 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 61950 ']' 00:35:44.880 17:32:45 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 61950 00:35:44.880 17:32:45 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:35:44.880 17:32:45 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:44.881 17:32:45 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61950 00:35:44.881 17:32:45 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:44.881 17:32:45 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:44.881 17:32:45 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61950' 00:35:44.881 killing process with pid 61950 00:35:44.881 17:32:45 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 61950 00:35:44.881 17:32:45 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 61950 00:35:47.408 17:32:47 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:47.408 17:32:47 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:35:47.408 17:32:47 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:35:47.408 17:32:47 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:47.408 17:32:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:35:47.408 ************************************ 00:35:47.408 START TEST bdev_hello_world 00:35:47.408 ************************************ 00:35:47.408 17:32:47 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:35:47.408 
[2024-11-26 17:32:47.860987] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:35:47.408 [2024-11-26 17:32:47.861118] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62606 ] 00:35:47.408 [2024-11-26 17:32:48.042862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:47.667 [2024-11-26 17:32:48.158733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:48.235 [2024-11-26 17:32:48.827720] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:35:48.235 [2024-11-26 17:32:48.827768] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:35:48.235 [2024-11-26 17:32:48.827794] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:35:48.235 [2024-11-26 17:32:48.830688] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:35:48.235 [2024-11-26 17:32:48.831493] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:35:48.235 [2024-11-26 17:32:48.831542] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:35:48.235 [2024-11-26 17:32:48.831890] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:35:48.235 00:35:48.235 [2024-11-26 17:32:48.831926] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:35:49.614 00:35:49.614 real 0m2.203s 00:35:49.614 user 0m1.845s 00:35:49.614 sys 0m0.249s 00:35:49.614 17:32:49 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:49.614 17:32:49 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:35:49.614 ************************************ 00:35:49.614 END TEST bdev_hello_world 00:35:49.614 ************************************ 00:35:49.614 17:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:35:49.614 17:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:49.614 17:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:49.614 17:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:35:49.614 ************************************ 00:35:49.614 START TEST bdev_bounds 00:35:49.614 ************************************ 00:35:49.614 17:32:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:35:49.614 17:32:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62648 00:35:49.614 17:32:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:35:49.614 17:32:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:35:49.614 Process bdevio pid: 62648 00:35:49.614 17:32:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62648' 00:35:49.614 17:32:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62648 00:35:49.614 17:32:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62648 ']' 00:35:49.614 17:32:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:49.614 17:32:50 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:49.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:49.614 17:32:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:49.614 17:32:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:49.614 17:32:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:35:49.614 [2024-11-26 17:32:50.130165] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:35:49.614 [2024-11-26 17:32:50.130295] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62648 ] 00:35:49.614 [2024-11-26 17:32:50.304815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:49.873 [2024-11-26 17:32:50.421614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:49.873 [2024-11-26 17:32:50.422575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:49.873 [2024-11-26 17:32:50.422588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:50.807 17:32:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:50.807 17:32:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:35:50.807 17:32:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:35:50.807 I/O targets: 00:35:50.807 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:35:50.807 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:35:50.807 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:35:50.807 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:35:50.807 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:35:50.807 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:35:50.807 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:35:50.807 00:35:50.807 00:35:50.807 CUnit - A unit testing framework for C - Version 2.1-3 00:35:50.807 http://cunit.sourceforge.net/ 00:35:50.807 00:35:50.807 00:35:50.807 Suite: bdevio tests on: Nvme3n1 00:35:50.807 Test: blockdev write read block ...passed 00:35:50.807 Test: blockdev write zeroes read block ...passed 00:35:50.807 Test: blockdev write zeroes read no split ...passed 00:35:50.807 Test: blockdev write zeroes read split ...passed 00:35:50.807 Test: blockdev write zeroes read split partial ...passed 00:35:50.807 Test: blockdev reset ...[2024-11-26 17:32:51.310194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:35:50.807 [2024-11-26 17:32:51.314960] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
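The two Nvme1n1 partitions in the I/O targets list above line up exactly: SPDK_TEST_first occupies blocks 256..655359, and SPDK_TEST_second begins at the next block, matching the offset_blocks values in the earlier bdev dump:

    # p1 offset (256) + p1 size (655104) = p2 offset
    awk 'BEGIN { print 256 + 655104 }'   # -> 655360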
00:35:50.807 passed 00:35:50.807 Test: blockdev write read 8 blocks ...passed 00:35:50.807 Test: blockdev write read size > 128k ...passed 00:35:50.807 Test: blockdev write read invalid size ...passed 00:35:50.807 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:50.807 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:50.807 Test: blockdev write read max offset ...passed 00:35:50.807 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:50.807 Test: blockdev writev readv 8 blocks ...passed 00:35:50.807 Test: blockdev writev readv 30 x 1block ...passed 00:35:50.807 Test: blockdev writev readv block ...passed 00:35:50.807 Test: blockdev writev readv size > 128k ...passed 00:35:50.807 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:50.808 Test: blockdev comparev and writev ...[2024-11-26 17:32:51.324738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bca04000 len:0x1000 00:35:50.808 [2024-11-26 17:32:51.324821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:35:50.808 passed 00:35:50.808 Test: blockdev nvme passthru rw ...passed 00:35:50.808 Test: blockdev nvme passthru vendor specific ...passed 00:35:50.808 Test: blockdev nvme admin passthru ...[2024-11-26 17:32:51.326031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:35:50.808 [2024-11-26 17:32:51.326074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:35:50.808 passed 00:35:50.808 Test: blockdev copy ...passed 00:35:50.808 Suite: bdevio tests on: Nvme2n3 00:35:50.808 Test: blockdev write read block ...passed 00:35:50.808 Test: blockdev write zeroes read block ...passed 00:35:50.808 Test: blockdev write zeroes read no split ...passed 00:35:50.808 Test: blockdev write zeroes read split ...passed 00:35:50.808 Test: blockdev write zeroes read split partial ...passed 00:35:50.808 Test: blockdev reset ...[2024-11-26 17:32:51.403448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:35:50.808 [2024-11-26 17:32:51.408958] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
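The COMPARE FAILURE (02/85) completions logged by these comparev tests, here and in the suites that follow, are the point of the test rather than a defect: the harness writes one buffer, then issues an NVMe Compare against different data and expects status SCT 0x2 (media errors) / SC 0x85 (compare failure). A hedged repro sketch with nvme-cli, assuming its standard write/compare flags and a scratch namespace:

    dd if=/dev/zero of=zeros.bin bs=4096 count=1
    tr '\0' '\377' < zeros.bin > ones.bin   # a buffer that cannot match
    nvme write   /dev/nvme0n1 --start-block=0 --block-count=0 --data-size=4096 --data=zeros.bin
    nvme compare /dev/nvme0n1 --start-block=0 --block-count=0 --data-size=4096 --data=ones.bin
    # expected: the compare fails with status 02/85 (Compare Failure);
    # --block-count is 0-based, so 0 means a single block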
00:35:50.808 passed 00:35:50.808 Test: blockdev write read 8 blocks ...passed 00:35:50.808 Test: blockdev write read size > 128k ...passed 00:35:50.808 Test: blockdev write read invalid size ...passed 00:35:50.808 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:50.808 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:50.808 Test: blockdev write read max offset ...passed 00:35:50.808 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:50.808 Test: blockdev writev readv 8 blocks ...passed 00:35:50.808 Test: blockdev writev readv 30 x 1block ...passed 00:35:50.808 Test: blockdev writev readv block ...passed 00:35:50.808 Test: blockdev writev readv size > 128k ...passed 00:35:50.808 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:50.808 Test: blockdev comparev and writev ...[2024-11-26 17:32:51.418851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bca02000 len:0x1000 00:35:50.808 [2024-11-26 17:32:51.418933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:35:50.808 passed 00:35:50.808 Test: blockdev nvme passthru rw ...passed 00:35:50.808 Test: blockdev nvme passthru vendor specific ...passed 00:35:50.808 Test: blockdev nvme admin passthru ...[2024-11-26 17:32:51.420066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:35:50.808 [2024-11-26 17:32:51.420108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:35:50.808 passed 00:35:50.808 Test: blockdev copy ...passed 00:35:50.808 Suite: bdevio tests on: Nvme2n2 00:35:50.808 Test: blockdev write read block ...passed 00:35:50.808 Test: blockdev write zeroes read block ...passed 00:35:50.808 Test: blockdev write zeroes read no split ...passed 00:35:50.808 Test: blockdev write zeroes read split ...passed 00:35:50.808 Test: blockdev write zeroes read split partial ...passed 00:35:50.808 Test: blockdev reset ...[2024-11-26 17:32:51.501071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:35:51.070 [2024-11-26 17:32:51.506281] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:35:51.070 passed 00:35:51.070 Test: blockdev write read 8 blocks ...passed 00:35:51.070 Test: blockdev write read size > 128k ...passed 00:35:51.070 Test: blockdev write read invalid size ...passed 00:35:51.070 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:51.070 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:51.070 Test: blockdev write read max offset ...passed 00:35:51.070 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:51.070 Test: blockdev writev readv 8 blocks ...passed 00:35:51.070 Test: blockdev writev readv 30 x 1block ...passed 00:35:51.070 Test: blockdev writev readv block ...passed 00:35:51.070 Test: blockdev writev readv size > 128k ...passed 00:35:51.070 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:51.070 Test: blockdev comparev and writev ...[2024-11-26 17:32:51.515525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d0838000 len:0x1000 00:35:51.070 [2024-11-26 17:32:51.515604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:35:51.070 passed 00:35:51.070 Test: blockdev nvme passthru rw ...passed 00:35:51.070 Test: blockdev nvme passthru vendor specific ...[2024-11-26 17:32:51.516589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:35:51.070 [2024-11-26 17:32:51.516626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:35:51.070 passed 00:35:51.070 Test: blockdev nvme admin passthru ...passed 00:35:51.070 Test: blockdev copy ...passed 00:35:51.070 Suite: bdevio tests on: Nvme2n1 00:35:51.070 Test: blockdev write read block ...passed 00:35:51.070 Test: blockdev write zeroes read block ...passed 00:35:51.070 Test: blockdev write zeroes read no split ...passed 00:35:51.070 Test: blockdev write zeroes read split ...passed 00:35:51.070 Test: blockdev write zeroes read split partial ...passed 00:35:51.070 Test: blockdev reset ...[2024-11-26 17:32:51.594364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:35:51.070 [2024-11-26 17:32:51.599237] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:35:51.070 passed 00:35:51.070 Test: blockdev write read 8 blocks ...passed 00:35:51.070 Test: blockdev write read size > 128k ...passed 00:35:51.070 Test: blockdev write read invalid size ...passed 00:35:51.070 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:51.070 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:51.070 Test: blockdev write read max offset ...passed 00:35:51.070 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:51.070 Test: blockdev writev readv 8 blocks ...passed 00:35:51.070 Test: blockdev writev readv 30 x 1block ...passed 00:35:51.070 Test: blockdev writev readv block ...passed 00:35:51.070 Test: blockdev writev readv size > 128k ...passed 00:35:51.070 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:51.070 Test: blockdev comparev and writev ...[2024-11-26 17:32:51.608471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d0834000 len:0x1000 00:35:51.070 [2024-11-26 17:32:51.608562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:35:51.070 passed 00:35:51.070 Test: blockdev nvme passthru rw ...passed 00:35:51.070 Test: blockdev nvme passthru vendor specific ...[2024-11-26 17:32:51.609344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:35:51.070 [2024-11-26 17:32:51.609379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:35:51.070 passed 00:35:51.070 Test: blockdev nvme admin passthru ...passed 00:35:51.070 Test: blockdev copy ...passed 00:35:51.070 Suite: bdevio tests on: Nvme1n1p2 00:35:51.070 Test: blockdev write read block ...passed 00:35:51.070 Test: blockdev write zeroes read block ...passed 00:35:51.070 Test: blockdev write zeroes read no split ...passed 00:35:51.070 Test: blockdev write zeroes read split ...passed 00:35:51.070 Test: blockdev write zeroes read split partial ...passed 00:35:51.070 Test: blockdev reset ...[2024-11-26 17:32:51.690099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:35:51.070 [2024-11-26 17:32:51.694782] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:35:51.070 passed 00:35:51.070 Test: blockdev write read 8 blocks ...passed 00:35:51.070 Test: blockdev write read size > 128k ...passed 00:35:51.070 Test: blockdev write read invalid size ...passed 00:35:51.070 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:51.070 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:51.070 Test: blockdev write read max offset ...passed 00:35:51.070 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:51.070 Test: blockdev writev readv 8 blocks ...passed 00:35:51.070 Test: blockdev writev readv 30 x 1block ...passed 00:35:51.070 Test: blockdev writev readv block ...passed 00:35:51.070 Test: blockdev writev readv size > 128k ...passed 00:35:51.070 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:51.070 Test: blockdev comparev and writev ...[2024-11-26 17:32:51.704833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d0830000 len:0x1000 00:35:51.070 [2024-11-26 17:32:51.704909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:35:51.070 passed 00:35:51.071 Test: blockdev nvme passthru rw ...passed 00:35:51.071 Test: blockdev nvme passthru vendor specific ...passed 00:35:51.071 Test: blockdev nvme admin passthru ...passed 00:35:51.071 Test: blockdev copy ...passed 00:35:51.071 Suite: bdevio tests on: Nvme1n1p1 00:35:51.071 Test: blockdev write read block ...passed 00:35:51.071 Test: blockdev write zeroes read block ...passed 00:35:51.071 Test: blockdev write zeroes read no split ...passed 00:35:51.071 Test: blockdev write zeroes read split ...passed 00:35:51.330 Test: blockdev write zeroes read split partial ...passed 00:35:51.330 Test: blockdev reset ...[2024-11-26 17:32:51.777199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:35:51.330 [2024-11-26 17:32:51.781893] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:35:51.330 passed 00:35:51.330 Test: blockdev write read 8 blocks ...passed 00:35:51.330 Test: blockdev write read size > 128k ...passed 00:35:51.330 Test: blockdev write read invalid size ...passed 00:35:51.330 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:51.330 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:51.330 Test: blockdev write read max offset ...passed 00:35:51.330 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:51.330 Test: blockdev writev readv 8 blocks ...passed 00:35:51.330 Test: blockdev writev readv 30 x 1block ...passed 00:35:51.330 Test: blockdev writev readv block ...passed 00:35:51.330 Test: blockdev writev readv size > 128k ...passed 00:35:51.330 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:51.330 Test: blockdev comparev and writev ...[2024-11-26 17:32:51.791777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2bcc0e000 len:0x1000 00:35:51.330 [2024-11-26 17:32:51.791854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:35:51.330 passed 00:35:51.330 Test: blockdev nvme passthru rw ...passed 00:35:51.330 Test: blockdev nvme passthru vendor specific ...passed 00:35:51.330 Test: blockdev nvme admin passthru ...passed 00:35:51.330 Test: blockdev copy ...passed 00:35:51.330 Suite: bdevio tests on: Nvme0n1 00:35:51.330 Test: blockdev write read block ...passed 00:35:51.330 Test: blockdev write zeroes read block ...passed 00:35:51.330 Test: blockdev write zeroes read no split ...passed 00:35:51.330 Test: blockdev write zeroes read split ...passed 00:35:51.330 Test: blockdev write zeroes read split partial ...passed 00:35:51.330 Test: blockdev reset ...[2024-11-26 17:32:51.863994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:35:51.330 [2024-11-26 17:32:51.868880] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:35:51.330 passed 00:35:51.330 Test: blockdev write read 8 blocks ...passed 00:35:51.330 Test: blockdev write read size > 128k ...passed 00:35:51.330 Test: blockdev write read invalid size ...passed 00:35:51.330 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:51.330 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:51.330 Test: blockdev write read max offset ...passed 00:35:51.330 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:51.330 Test: blockdev writev readv 8 blocks ...passed 00:35:51.330 Test: blockdev writev readv 30 x 1block ...passed 00:35:51.330 Test: blockdev writev readv block ...passed 00:35:51.330 Test: blockdev writev readv size > 128k ...passed 00:35:51.330 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:51.330 Test: blockdev comparev and writev ...passed 00:35:51.330 Test: blockdev nvme passthru rw ...[2024-11-26 17:32:51.878606] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:35:51.330 separate metadata which is not supported yet. 
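The "skipping comparev_and_writev ... separate metadata" notice just above is bdevio declining the compare-and-write test on a namespace formatted with a separate (non-interleaved) metadata buffer. One way to confirm which layout a given bdev has is to query it over the RPC socket; the field names below match bdev_get_bdevs output on recent SPDK, but treat them as an assumption for other versions:

# hedged sketch: md_size > 0 with md_interleave == false is the
# "separate metadata" case the message refers to
./scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
    | jq '.[0] | {block_size, md_size, md_interleave}'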
00:35:51.330 passed 00:35:51.330 Test: blockdev nvme passthru vendor specific ...[2024-11-26 17:32:51.879475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:35:51.330 passed 00:35:51.330 Test: blockdev nvme admin passthru ...[2024-11-26 17:32:51.879615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:35:51.330 passed 00:35:51.330 Test: blockdev copy ...passed 00:35:51.330 00:35:51.330 Run Summary: Type Total Ran Passed Failed Inactive 00:35:51.330 suites 7 7 n/a 0 0 00:35:51.330 tests 161 161 161 0 0 00:35:51.330 asserts 1025 1025 1025 0 n/a 00:35:51.330 00:35:51.330 Elapsed time = 1.746 seconds 00:35:51.330 0 00:35:51.330 17:32:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62648 00:35:51.330 17:32:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62648 ']' 00:35:51.330 17:32:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62648 00:35:51.330 17:32:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:35:51.330 17:32:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:51.330 17:32:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62648 00:35:51.330 17:32:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:51.330 17:32:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:51.330 killing process with pid 62648 00:35:51.330 17:32:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62648' 00:35:51.330 17:32:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62648 00:35:51.330 17:32:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62648 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:35:52.731 00:35:52.731 real 0m2.971s 00:35:52.731 user 0m7.648s 00:35:52.731 sys 0m0.410s 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:35:52.731 ************************************ 00:35:52.731 END TEST bdev_bounds 00:35:52.731 ************************************ 00:35:52.731 17:32:53 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:35:52.731 17:32:53 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:35:52.731 17:32:53 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:52.731 17:32:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:35:52.731 ************************************ 00:35:52.731 START TEST bdev_nbd 00:35:52.731 ************************************ 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:35:52.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62713 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62713 /var/tmp/spdk-nbd.sock 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62713 ']' 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:52.731 17:32:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:35:52.731 [2024-11-26 17:32:53.181623] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
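Everything from here down is nbd_function_test exercising the NBD path: bdev_svc is started on a private RPC socket, each bdev is exported as a /dev/nbdX device, and raw dd I/O is used to prove the devices work. A minimal by-hand equivalent of the bring-up, assuming an SPDK checkout laid out like the one in this log and the nbd kernel module available:

# sketch only -- the harness drives this through nbd_rpc_start_stop_verify
modprobe nbd
./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
    --json test/bdev/bdev.json &
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1            # RPC picks a free /dev/nbdX
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 # or pin one explicitly
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks                     # list bdev <-> nbd mappings
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0           # tear down

Both nbd_start_disk forms appear in the trace below: the first round lets the RPC choose the device, the second round passes /dev/nbdX explicitly.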
00:35:52.731 [2024-11-26 17:32:53.182101] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:52.731 [2024-11-26 17:32:53.362298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:52.989 [2024-11-26 17:32:53.472823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:53.557 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:53.557 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:35:53.557 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:35:53.557 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:53.557 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:35:53.557 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:35:53.557 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:35:53.557 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:53.557 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:35:53.557 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:35:53.557 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:35:53.557 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:35:53.557 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:35:53.557 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:35:53.557 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:35:53.816 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:35:53.816 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:35:53.816 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:35:53.816 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:35:53.816 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:53.816 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:53.816 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:53.816 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:35:53.816 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:53.816 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:53.816 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:53.816 17:32:54 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:53.816 1+0 records in 00:35:53.816 1+0 records out 00:35:53.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00096913 s, 4.2 MB/s 00:35:53.816 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:53.816 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:53.816 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:53.816 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:53.816 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:53.816 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:53.816 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:35:53.816 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:35:54.076 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:35:54.076 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:35:54.076 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:35:54.076 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:35:54.076 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:54.076 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:54.076 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:54.076 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:35:54.076 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:54.076 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:54.076 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:54.076 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:54.076 1+0 records in 00:35:54.076 1+0 records out 00:35:54.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000935324 s, 4.4 MB/s 00:35:54.076 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:54.076 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:54.076 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:54.076 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:54.076 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:54.076 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:54.076 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:35:54.076 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:35:54.335 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:35:54.335 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:35:54.335 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:35:54.335 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:35:54.335 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:54.335 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:54.335 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:54.335 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:35:54.335 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:54.335 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:54.335 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:54.335 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:54.335 1+0 records in 00:35:54.335 1+0 records out 00:35:54.335 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482466 s, 8.5 MB/s 00:35:54.335 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:54.335 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:54.335 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:54.335 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:54.335 17:32:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:54.335 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:54.335 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:35:54.335 17:32:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:35:54.594 17:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:35:54.595 17:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:35:54.595 17:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:35:54.595 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:35:54.595 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:54.595 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:54.595 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:54.595 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:35:54.595 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:54.595 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:54.595 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:54.595 17:32:55 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:54.595 1+0 records in 00:35:54.595 1+0 records out 00:35:54.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00096856 s, 4.2 MB/s 00:35:54.595 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:54.595 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:54.595 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:54.595 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:54.595 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:54.595 17:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:54.595 17:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:35:54.595 17:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:35:54.854 17:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:35:54.854 17:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:35:54.854 17:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:35:54.854 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:35:54.854 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:54.854 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:54.854 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:54.854 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:35:54.854 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:54.854 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:54.854 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:54.854 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:54.854 1+0 records in 00:35:54.854 1+0 records out 00:35:54.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000541321 s, 7.6 MB/s 00:35:54.854 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:54.854 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:54.854 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:54.854 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:54.854 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:54.854 17:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:54.854 17:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:35:54.854 17:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
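The waitfornbd xtrace repeating through this stretch (nbd0 up through nbd6) is dense; reconstructed into readable form it is roughly the following. The loop bounds, the /proc/partitions probe, and the 4 KiB O_DIRECT read-back are all visible in the trace; the sleep between retries is an assumption, since the trace only ever shows the first probe succeeding:

# readable sketch of the helper traced above, not its verbatim source
waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1 # assumed retry delay
    done
    # prove the device actually serves I/O: read one 4 KiB block with O_DIRECT
    for ((i = 1; i <= 20; i++)); do
        dd if="/dev/$nbd_name" of=nbdtest bs=4096 count=1 iflag=direct && break
        sleep 0.1 # assumed retry delay
    done
    size=$(stat -c %s nbdtest)
    rm -f nbdtest
    [ "$size" != 0 ] # non-empty read => device is live (trace: size=4096, return 0)
}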
00:35:55.114 17:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:35:55.114 17:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:35:55.114 17:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:35:55.114 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:35:55.114 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:55.114 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:55.114 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:55.114 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:35:55.114 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:55.114 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:55.114 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:55.114 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:55.114 1+0 records in 00:35:55.114 1+0 records out 00:35:55.114 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000722617 s, 5.7 MB/s 00:35:55.114 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:55.114 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:55.114 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:55.114 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:55.114 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:55.114 17:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:55.114 17:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:35:55.114 17:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:35:55.373 17:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:35:55.373 17:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:35:55.373 17:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:35:55.373 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:35:55.373 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:55.373 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:55.373 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:55.373 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:35:55.373 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:55.373 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:55.373 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:55.373 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:55.373 1+0 records in 00:35:55.373 1+0 records out 00:35:55.373 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000710965 s, 5.8 MB/s 00:35:55.373 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:55.373 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:55.373 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:55.373 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:55.373 17:32:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:55.373 17:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:35:55.373 17:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:35:55.373 17:32:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:55.633 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:35:55.633 { 00:35:55.633 "nbd_device": "/dev/nbd0", 00:35:55.633 "bdev_name": "Nvme0n1" 00:35:55.633 }, 00:35:55.633 { 00:35:55.633 "nbd_device": "/dev/nbd1", 00:35:55.633 "bdev_name": "Nvme1n1p1" 00:35:55.633 }, 00:35:55.633 { 00:35:55.633 "nbd_device": "/dev/nbd2", 00:35:55.633 "bdev_name": "Nvme1n1p2" 00:35:55.633 }, 00:35:55.633 { 00:35:55.633 "nbd_device": "/dev/nbd3", 00:35:55.633 "bdev_name": "Nvme2n1" 00:35:55.633 }, 00:35:55.633 { 00:35:55.633 "nbd_device": "/dev/nbd4", 00:35:55.633 "bdev_name": "Nvme2n2" 00:35:55.633 }, 00:35:55.633 { 00:35:55.633 "nbd_device": "/dev/nbd5", 00:35:55.633 "bdev_name": "Nvme2n3" 00:35:55.633 }, 00:35:55.633 { 00:35:55.633 "nbd_device": "/dev/nbd6", 00:35:55.633 "bdev_name": "Nvme3n1" 00:35:55.633 } 00:35:55.633 ]' 00:35:55.633 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:35:55.633 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:35:55.633 { 00:35:55.633 "nbd_device": "/dev/nbd0", 00:35:55.633 "bdev_name": "Nvme0n1" 00:35:55.633 }, 00:35:55.633 { 00:35:55.633 "nbd_device": "/dev/nbd1", 00:35:55.633 "bdev_name": "Nvme1n1p1" 00:35:55.633 }, 00:35:55.633 { 00:35:55.633 "nbd_device": "/dev/nbd2", 00:35:55.633 "bdev_name": "Nvme1n1p2" 00:35:55.634 }, 00:35:55.634 { 00:35:55.634 "nbd_device": "/dev/nbd3", 00:35:55.634 "bdev_name": "Nvme2n1" 00:35:55.634 }, 00:35:55.634 { 00:35:55.634 "nbd_device": "/dev/nbd4", 00:35:55.634 "bdev_name": "Nvme2n2" 00:35:55.634 }, 00:35:55.634 { 00:35:55.634 "nbd_device": "/dev/nbd5", 00:35:55.634 "bdev_name": "Nvme2n3" 00:35:55.634 }, 00:35:55.634 { 00:35:55.634 "nbd_device": "/dev/nbd6", 00:35:55.634 "bdev_name": "Nvme3n1" 00:35:55.634 } 00:35:55.634 ]' 00:35:55.634 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:35:55.634 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:35:55.634 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:55.634 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:35:55.634 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:55.634 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:35:55.634 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:55.634 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:35:55.893 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:55.893 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:55.893 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:55.893 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:55.893 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:55.893 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:56.152 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:56.152 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:56.152 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:56.152 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:35:56.152 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:56.152 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:56.152 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:56.152 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:56.152 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:56.152 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:56.152 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:56.152 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:56.152 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:56.152 17:32:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:35:56.412 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:35:56.412 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:35:56.412 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:35:56.412 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:56.412 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:56.412 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:35:56.412 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:56.412 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:56.412 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:56.412 17:32:57 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:35:56.672 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:35:56.672 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:35:56.672 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:35:56.672 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:56.672 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:56.672 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:35:56.672 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:56.672 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:56.672 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:56.672 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:35:56.931 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:35:56.931 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:35:56.932 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:35:56.932 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:56.932 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:56.932 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:35:56.932 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:56.932 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:56.932 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:56.932 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:35:57.191 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:35:57.191 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:35:57.191 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:35:57.191 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:57.191 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:57.191 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:35:57.191 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:57.191 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:57.191 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:57.191 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:35:57.450 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:35:57.450 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:35:57.450 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
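The teardown running through this stretch is the mirror image: after each nbd_stop_disk, waitfornbd_exit polls until the name drops out of /proc/partitions (the nbd6 instance continues just below). xtrace does not show branch structure, so the if/else here is inferred from the grep-then-break sequence in the trace:

# inferred sketch of the exit-wait helper traced around this point
waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1 # still attached; assumed retry delay
        else
            break     # gone -> detach finished (trace: grep, then break)
        fi
    done
    return 0
}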
00:35:57.450 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:57.450 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:57.450 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:35:57.450 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:35:57.450 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:35:57.450 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:57.450 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:57.450 17:32:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:57.450 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:35:57.450 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:35:57.450 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:57.710 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:35:57.710 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:35:57.710 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:57.710 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:35:57.710 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:35:57.710 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:35:57.710 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:35:57.710 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:35:57.710 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:35:57.710 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:35:57.710 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:57.710 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:35:57.710 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:35:57.710 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:35:57.710 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:35:57.710 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:35:57.710 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:57.711 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:35:57.711 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:57.711 17:32:58 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:35:57.711 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:57.711 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:35:57.711 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:57.711 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:35:57.711 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:35:57.711 /dev/nbd0 00:35:57.711 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:57.711 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:57.711 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:35:57.970 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:57.970 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:57.970 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:57.970 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:35:57.970 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:57.970 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:57.970 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:57.970 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:57.970 1+0 records in 00:35:57.970 1+0 records out 00:35:57.970 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619387 s, 6.6 MB/s 00:35:57.970 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:57.970 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:57.970 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:57.970 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:57.970 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:57.970 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:57.970 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:35:57.970 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:35:57.970 /dev/nbd1 00:35:57.970 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:35:57.971 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:35:57.971 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:35:57.971 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:57.971 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:57.971 17:32:58 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:57.971 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:35:57.971 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:57.971 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:57.971 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:57.971 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:58.228 1+0 records in 00:35:58.229 1+0 records out 00:35:58.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000727727 s, 5.6 MB/s 00:35:58.229 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:58.229 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:58.229 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:58.229 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:58.229 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:58.229 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:58.229 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:35:58.229 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:35:58.229 /dev/nbd10 00:35:58.229 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:35:58.229 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:35:58.229 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:35:58.229 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:58.229 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:58.229 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:58.229 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:35:58.229 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:58.229 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:58.229 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:58.229 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:58.229 1+0 records in 00:35:58.229 1+0 records out 00:35:58.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000738649 s, 5.5 MB/s 00:35:58.229 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:58.487 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:58.487 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:58.487 17:32:58 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:58.487 17:32:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:58.487 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:58.487 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:35:58.487 17:32:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:35:58.487 /dev/nbd11 00:35:58.487 17:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:35:58.487 17:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:35:58.487 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:35:58.487 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:58.487 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:58.487 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:58.487 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:35:58.487 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:58.487 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:58.487 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:58.487 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:58.487 1+0 records in 00:35:58.487 1+0 records out 00:35:58.487 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103938 s, 3.9 MB/s 00:35:58.487 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:58.745 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:58.745 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:58.745 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:58.745 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:58.745 17:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:58.745 17:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:35:58.745 17:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:35:58.745 /dev/nbd12 00:35:58.745 17:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:35:58.745 17:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:35:58.745 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:35:58.745 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:58.745 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:58.745 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:58.745 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
00:35:58.745 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:58.746 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:58.746 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:58.746 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:58.746 1+0 records in 00:35:58.746 1+0 records out 00:35:58.746 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000745816 s, 5.5 MB/s 00:35:58.746 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:58.746 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:58.746 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:58.746 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:58.746 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:58.746 17:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:58.746 17:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:35:58.746 17:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:35:59.311 /dev/nbd13 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:59.311 1+0 records in 00:35:59.311 1+0 records out 00:35:59.311 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000826433 s, 5.0 MB/s 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:35:59.311 /dev/nbd14 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:35:59.311 17:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:35:59.311 17:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:59.311 17:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:59.311 17:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:59.570 1+0 records in 00:35:59.570 1+0 records out 00:35:59.570 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00487966 s, 839 kB/s 00:35:59.570 17:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:59.570 17:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:35:59.570 17:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:59.570 17:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:59.570 17:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:35:59.570 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:59.570 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:35:59.570 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:35:59.570 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:35:59.570 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:35:59.831 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:35:59.831 { 00:35:59.831 "nbd_device": "/dev/nbd0", 00:35:59.831 "bdev_name": "Nvme0n1" 00:35:59.831 }, 00:35:59.831 { 00:35:59.831 "nbd_device": "/dev/nbd1", 00:35:59.831 "bdev_name": "Nvme1n1p1" 00:35:59.831 }, 00:35:59.831 { 00:35:59.831 "nbd_device": "/dev/nbd10", 00:35:59.831 "bdev_name": "Nvme1n1p2" 00:35:59.831 }, 00:35:59.831 { 00:35:59.831 "nbd_device": "/dev/nbd11", 00:35:59.831 "bdev_name": "Nvme2n1" 00:35:59.831 }, 00:35:59.831 { 00:35:59.831 "nbd_device": "/dev/nbd12", 00:35:59.831 "bdev_name": "Nvme2n2" 00:35:59.831 }, 00:35:59.831 { 00:35:59.831 "nbd_device": "/dev/nbd13", 00:35:59.831 "bdev_name": "Nvme2n3" 
00:35:59.831 }, 00:35:59.831 { 00:35:59.831 "nbd_device": "/dev/nbd14", 00:35:59.831 "bdev_name": "Nvme3n1" 00:35:59.831 } 00:35:59.831 ]' 00:35:59.831 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:35:59.831 { 00:35:59.831 "nbd_device": "/dev/nbd0", 00:35:59.831 "bdev_name": "Nvme0n1" 00:35:59.831 }, 00:35:59.831 { 00:35:59.831 "nbd_device": "/dev/nbd1", 00:35:59.831 "bdev_name": "Nvme1n1p1" 00:35:59.831 }, 00:35:59.831 { 00:35:59.831 "nbd_device": "/dev/nbd10", 00:35:59.831 "bdev_name": "Nvme1n1p2" 00:35:59.831 }, 00:35:59.831 { 00:35:59.831 "nbd_device": "/dev/nbd11", 00:35:59.831 "bdev_name": "Nvme2n1" 00:35:59.831 }, 00:35:59.831 { 00:35:59.831 "nbd_device": "/dev/nbd12", 00:35:59.831 "bdev_name": "Nvme2n2" 00:35:59.831 }, 00:35:59.831 { 00:35:59.831 "nbd_device": "/dev/nbd13", 00:35:59.831 "bdev_name": "Nvme2n3" 00:35:59.831 }, 00:35:59.831 { 00:35:59.831 "nbd_device": "/dev/nbd14", 00:35:59.831 "bdev_name": "Nvme3n1" 00:35:59.831 } 00:35:59.831 ]' 00:35:59.831 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:35:59.831 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:35:59.831 /dev/nbd1 00:35:59.831 /dev/nbd10 00:35:59.831 /dev/nbd11 00:35:59.831 /dev/nbd12 00:35:59.831 /dev/nbd13 00:35:59.831 /dev/nbd14' 00:35:59.831 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:35:59.831 /dev/nbd1 00:35:59.831 /dev/nbd10 00:35:59.831 /dev/nbd11 00:35:59.831 /dev/nbd12 00:35:59.831 /dev/nbd13 00:35:59.831 /dev/nbd14' 00:35:59.831 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:35:59.831 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:35:59.831 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:35:59.831 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:35:59.831 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:35:59.831 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:35:59.831 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:35:59.831 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:35:59.831 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:35:59.831 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:35:59.831 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:35:59.831 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:35:59.831 256+0 records in 00:35:59.831 256+0 records out 00:35:59.831 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115073 s, 91.1 MB/s 00:35:59.831 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:59.831 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:35:59.831 256+0 records in 00:35:59.831 256+0 records out 00:35:59.831 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.135159 s, 7.8 MB/s 00:35:59.831 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:35:59.831 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:36:00.097 256+0 records in 00:36:00.097 256+0 records out 00:36:00.097 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137496 s, 7.6 MB/s 00:36:00.097 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:36:00.097 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:36:00.097 256+0 records in 00:36:00.097 256+0 records out 00:36:00.097 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138656 s, 7.6 MB/s 00:36:00.097 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:36:00.097 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:36:00.357 256+0 records in 00:36:00.357 256+0 records out 00:36:00.357 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1413 s, 7.4 MB/s 00:36:00.357 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:36:00.357 17:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:36:00.357 256+0 records in 00:36:00.357 256+0 records out 00:36:00.357 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136612 s, 7.7 MB/s 00:36:00.357 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:36:00.357 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:36:00.616 256+0 records in 00:36:00.616 256+0 records out 00:36:00.616 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13773 s, 7.6 MB/s 00:36:00.616 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:36:00.616 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:36:00.875 256+0 records in 00:36:00.875 256+0 records out 00:36:00.875 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139173 s, 7.5 MB/s 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:00.875 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:36:01.135 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:01.135 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:01.135 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:01.135 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:01.135 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:01.135 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:01.135 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:01.135 17:33:01 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:36:01.135 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:01.135 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:36:01.395 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:36:01.395 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:36:01.395 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:36:01.395 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:01.395 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:01.395 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:36:01.395 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:01.395 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:01.395 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:01.395 17:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:36:01.655 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:36:01.655 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:36:01.655 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:36:01.655 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:01.655 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:01.655 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:36:01.655 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:01.655 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:01.655 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:01.655 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:36:01.914 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:36:01.914 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:36:01.914 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:36:01.914 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:01.914 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:01.914 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:36:01.914 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:01.914 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:01.914 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:01.915 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:36:01.915 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:36:01.915 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:36:01.915 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:36:01.915 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:01.915 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:01.915 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:36:01.915 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:01.915 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:01.915 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:01.915 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:36:02.174 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:36:02.174 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:36:02.174 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:36:02.174 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:02.174 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:02.174 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:36:02.174 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:02.174 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:02.174 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:02.174 17:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:36:02.434 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:36:02.434 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:36:02.434 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:36:02.434 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:02.434 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:02.434 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:36:02.434 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:02.434 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:02.434 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:36:02.434 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:02.434 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:36:02.694 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:36:02.694 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:36:02.694 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:36:02.694 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:36:02.694 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:36:02.694 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:36:02.694 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:36:02.694 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:36:02.694 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:36:02.694 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:36:02.694 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:36:02.694 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:36:02.694 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:36:02.694 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:02.694 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:36:02.694 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:36:02.953 malloc_lvol_verify 00:36:02.953 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:36:03.212 a4463f49-0e96-43f6-973d-29b2ff18cf36 00:36:03.212 17:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:36:03.471 74c7ae74-5467-47fa-940a-2c89c8c86ba1 00:36:03.471 17:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:36:03.729 /dev/nbd0 00:36:03.729 17:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:36:03.729 17:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:36:03.730 17:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:36:03.730 17:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:36:03.730 17:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:36:03.730 mke2fs 1.47.0 (5-Feb-2023) 00:36:03.730 Discarding device blocks: 0/4096 done 00:36:03.730 Creating filesystem with 4096 1k blocks and 1024 inodes 00:36:03.730 00:36:03.730 Allocating group tables: 0/1 done 00:36:03.730 Writing inode tables: 0/1 done 00:36:03.730 Creating journal (1024 blocks): done 00:36:03.730 Writing superblocks and filesystem accounting information: 0/1 done 00:36:03.730 00:36:03.730 17:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:36:03.730 17:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:03.730 17:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:36:03.730 17:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:03.730 17:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:36:03.730 17:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:36:03.730 17:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:36:03.989 17:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:03.989 17:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:03.989 17:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:03.989 17:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:03.989 17:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:03.989 17:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:03.989 17:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:03.989 17:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:03.989 17:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62713 00:36:03.989 17:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62713 ']' 00:36:03.989 17:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62713 00:36:03.989 17:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:36:03.989 17:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:03.989 17:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62713 00:36:03.989 17:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:03.989 17:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:03.989 17:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62713' 00:36:03.989 killing process with pid 62713 00:36:03.989 17:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62713 00:36:03.989 17:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62713 00:36:05.367 17:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:36:05.367 00:36:05.367 real 0m12.686s 00:36:05.367 user 0m16.407s 00:36:05.367 sys 0m5.497s 00:36:05.367 17:33:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:05.367 17:33:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:36:05.367 ************************************ 00:36:05.367 END TEST bdev_nbd 00:36:05.367 ************************************ 00:36:05.367 17:33:05 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:36:05.367 17:33:05 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:36:05.367 17:33:05 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:36:05.367 skipping fio tests on NVMe due to multi-ns failures. 00:36:05.367 17:33:05 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:36:05.367 17:33:05 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:05.367 17:33:05 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:36:05.367 17:33:05 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:36:05.367 17:33:05 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:05.367 17:33:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:36:05.367 ************************************ 00:36:05.367 START TEST bdev_verify 00:36:05.367 ************************************ 00:36:05.368 17:33:05 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:36:05.368 [2024-11-26 17:33:05.933104] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:36:05.368 [2024-11-26 17:33:05.933801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63140 ] 00:36:05.626 [2024-11-26 17:33:06.118221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:05.626 [2024-11-26 17:33:06.236084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:05.626 [2024-11-26 17:33:06.236129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:06.630 Running I/O for 5 seconds... 
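The interim samples below are easy to sanity-check: this bdevperf run was started with -o 4096, so every I/O is 4 KiB and MiB/s is simply IOPS / 256. A one-line check against the final 21312.00 IOPS sample, using nothing beyond the flags shown in the command line above:

  # MiB/s = IOPS * io_size_bytes / 2^20; with -o 4096 this is IOPS / 256
  awk 'BEGIN { printf "%.2f MiB/s\n", 21312 * 4096 / 1048576 }'    # -> 83.25 MiB/s

which matches the 83.25 MiB/s printed alongside that sample.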
00:36:08.502 20736.00 IOPS, 81.00 MiB/s [2024-11-26T17:33:10.573Z] 20704.00 IOPS, 80.88 MiB/s [2024-11-26T17:33:11.511Z] 21632.00 IOPS, 84.50 MiB/s [2024-11-26T17:33:12.448Z] 21296.00 IOPS, 83.19 MiB/s [2024-11-26T17:33:12.448Z] 21312.00 IOPS, 83.25 MiB/s 00:36:11.754 Latency(us) 00:36:11.754 [2024-11-26T17:33:12.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:11.754 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:11.754 Verification LBA range: start 0x0 length 0xbd0bd 00:36:11.754 Nvme0n1 : 5.07 1527.14 5.97 0.00 0.00 83443.86 13580.95 77906.25 00:36:11.754 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:11.754 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:36:11.754 Nvme0n1 : 5.09 1482.51 5.79 0.00 0.00 85825.80 17476.27 75800.67 00:36:11.754 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:11.754 Verification LBA range: start 0x0 length 0x4ff80 00:36:11.754 Nvme1n1p1 : 5.07 1526.01 5.96 0.00 0.00 83377.96 14949.58 71589.53 00:36:11.754 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:11.754 Verification LBA range: start 0x4ff80 length 0x4ff80 00:36:11.754 Nvme1n1p1 : 5.10 1482.16 5.79 0.00 0.00 85726.43 15686.53 77906.25 00:36:11.754 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:11.754 Verification LBA range: start 0x0 length 0x4ff7f 00:36:11.754 Nvme1n1p2 : 5.08 1525.52 5.96 0.00 0.00 83260.79 14423.18 65693.92 00:36:11.754 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:11.754 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:36:11.754 Nvme1n1p2 : 5.10 1481.82 5.79 0.00 0.00 85632.82 14739.02 80432.94 00:36:11.754 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:11.754 Verification LBA range: start 0x0 length 0x80000 00:36:11.754 Nvme2n1 : 5.08 1524.93 5.96 0.00 0.00 83165.87 13580.95 63588.34 00:36:11.754 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:11.754 Verification LBA range: start 0x80000 length 0x80000 00:36:11.754 Nvme2n1 : 5.10 1481.09 5.79 0.00 0.00 85514.16 15581.25 77906.25 00:36:11.754 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:11.754 Verification LBA range: start 0x0 length 0x80000 00:36:11.754 Nvme2n2 : 5.09 1534.19 5.99 0.00 0.00 82782.64 8053.82 60640.54 00:36:11.754 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:11.754 Verification LBA range: start 0x80000 length 0x80000 00:36:11.754 Nvme2n2 : 5.10 1480.77 5.78 0.00 0.00 85383.51 15160.13 77906.25 00:36:11.754 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:11.754 Verification LBA range: start 0x0 length 0x80000 00:36:11.754 Nvme2n3 : 5.09 1533.77 5.99 0.00 0.00 82685.56 8001.18 62746.11 00:36:11.754 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:11.754 Verification LBA range: start 0x80000 length 0x80000 00:36:11.754 Nvme2n3 : 5.07 1476.05 5.77 0.00 0.00 86244.67 17581.55 78748.48 00:36:11.754 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:11.754 Verification LBA range: start 0x0 length 0x20000 00:36:11.754 Nvme3n1 : 5.09 1533.39 5.99 0.00 0.00 82555.79 7580.07 66115.03 00:36:11.754 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:11.754 Verification LBA range: start 0x20000 length 0x20000 00:36:11.754 Nvme3n1 
: 5.09 1483.18 5.79 0.00 0.00 85955.55 16212.92 73695.10 00:36:11.754 [2024-11-26T17:33:12.448Z] =================================================================================================================== 00:36:11.754 [2024-11-26T17:33:12.448Z] Total : 21072.54 82.31 0.00 0.00 84375.60 7580.07 80432.94 00:36:13.132 00:36:13.132 real 0m7.759s 00:36:13.132 user 0m14.320s 00:36:13.132 sys 0m0.339s 00:36:13.132 17:33:13 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:13.132 ************************************ 00:36:13.132 17:33:13 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:36:13.132 END TEST bdev_verify 00:36:13.132 ************************************ 00:36:13.132 17:33:13 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:36:13.132 17:33:13 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:36:13.132 17:33:13 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:13.132 17:33:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:36:13.132 ************************************ 00:36:13.132 START TEST bdev_verify_big_io 00:36:13.132 ************************************ 00:36:13.132 17:33:13 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:36:13.132 [2024-11-26 17:33:13.761083] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:36:13.132 [2024-11-26 17:33:13.761211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63244 ] 00:36:13.390 [2024-11-26 17:33:13.943454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:13.390 [2024-11-26 17:33:14.051967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:13.390 [2024-11-26 17:33:14.051998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:14.327 Running I/O for 5 seconds... 
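The latency columns in the verify table above (microseconds, per the Latency(us) header) are also internally consistent with Little's law: outstanding I/Os equal IOPS times average latency, which should sit near the -q 128 queue depth configured per job. A spot check against the Nvme0n1 core-0x1 row, plain arithmetic on values taken from the table:

  # Little's law: concurrency = IOPS * avg_latency_in_seconds
  awk 'BEGIN { printf "%.1f outstanding I/Os\n", 1527.14 * 83443.86e-6 }'   # -> ~127.4, i.e. the -q 128 depth

The big-I/O run now starting below uses -o 65536, so its MiB/s figures are IOPS / 16 by the same accounting (e.g. 3905.00 IOPS -> 244.06 MiB/s).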
00:36:19.055 2538.00 IOPS, 158.62 MiB/s [2024-11-26T17:33:21.127Z] 3103.00 IOPS, 193.94 MiB/s [2024-11-26T17:33:21.127Z] 3905.00 IOPS, 244.06 MiB/s 00:36:20.433 Latency(us) 00:36:20.433 [2024-11-26T17:33:21.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:20.433 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:36:20.433 Verification LBA range: start 0x0 length 0xbd0b 00:36:20.433 Nvme0n1 : 5.66 128.55 8.03 0.00 0.00 954216.34 41900.93 1516013.49 00:36:20.433 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:36:20.433 Verification LBA range: start 0xbd0b length 0xbd0b 00:36:20.433 Nvme0n1 : 5.58 126.27 7.89 0.00 0.00 967117.08 24003.55 1489062.14 00:36:20.433 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:36:20.433 Verification LBA range: start 0x0 length 0x4ff8 00:36:20.433 Nvme1n1p1 : 5.67 140.86 8.80 0.00 0.00 863694.18 97698.65 855705.39 00:36:20.433 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:36:20.433 Verification LBA range: start 0x4ff8 length 0x4ff8 00:36:20.433 Nvme1n1p1 : 5.67 146.80 9.17 0.00 0.00 828018.36 80432.94 798433.77 00:36:20.433 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:36:20.433 Verification LBA range: start 0x0 length 0x4ff7 00:36:20.433 Nvme1n1p2 : 5.73 139.17 8.70 0.00 0.00 844007.58 101067.57 798433.77 00:36:20.433 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:36:20.433 Verification LBA range: start 0x4ff7 length 0x4ff7 00:36:20.433 Nvme1n1p2 : 5.80 150.76 9.42 0.00 0.00 785667.26 69062.84 805171.61 00:36:20.433 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:36:20.433 Verification LBA range: start 0x0 length 0x8000 00:36:20.433 Nvme2n1 : 5.74 144.48 9.03 0.00 0.00 806697.84 67799.49 811909.45 00:36:20.433 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:36:20.433 Verification LBA range: start 0x8000 length 0x8000 00:36:20.433 Nvme2n1 : 5.74 142.11 8.88 0.00 0.00 817018.82 69483.95 1495799.98 00:36:20.433 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:36:20.433 Verification LBA range: start 0x0 length 0x8000 00:36:20.433 Nvme2n2 : 5.81 150.54 9.41 0.00 0.00 758604.93 29688.60 832122.96 00:36:20.433 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:36:20.433 Verification LBA range: start 0x8000 length 0x8000 00:36:20.433 Nvme2n2 : 5.81 144.52 9.03 0.00 0.00 782845.75 61061.65 1516013.49 00:36:20.433 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:36:20.433 Verification LBA range: start 0x0 length 0x8000 00:36:20.433 Nvme2n3 : 5.81 153.90 9.62 0.00 0.00 727689.42 37058.11 869181.07 00:36:20.433 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:36:20.433 Verification LBA range: start 0x8000 length 0x8000 00:36:20.433 Nvme2n3 : 5.85 156.18 9.76 0.00 0.00 711619.94 15160.13 1529489.17 00:36:20.433 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:36:20.433 Verification LBA range: start 0x0 length 0x2000 00:36:20.433 Nvme3n1 : 5.82 164.64 10.29 0.00 0.00 666595.05 4158.51 923083.77 00:36:20.433 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:36:20.433 Verification LBA range: start 0x2000 length 0x2000 00:36:20.433 Nvme3n1 : 5.85 166.82 10.43 0.00 0.00 652091.11 2566.17 1563178.36 00:36:20.433 
[2024-11-26T17:33:21.127Z] =================================================================================================================== 00:36:20.433 [2024-11-26T17:33:21.127Z] Total : 2055.62 128.48 0.00 0.00 789706.27 2566.17 1563178.36 00:36:22.335 00:36:22.335 real 0m9.031s 00:36:22.335 user 0m16.874s 00:36:22.335 sys 0m0.343s 00:36:22.335 17:33:22 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:22.335 17:33:22 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:36:22.335 ************************************ 00:36:22.335 END TEST bdev_verify_big_io 00:36:22.335 ************************************ 00:36:22.335 17:33:22 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:22.335 17:33:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:36:22.335 17:33:22 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:22.335 17:33:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:36:22.335 ************************************ 00:36:22.335 START TEST bdev_write_zeroes 00:36:22.335 ************************************ 00:36:22.335 17:33:22 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:22.335 [2024-11-26 17:33:22.869415] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:36:22.335 [2024-11-26 17:33:22.869543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63357 ] 00:36:22.594 [2024-11-26 17:33:23.050127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:22.594 [2024-11-26 17:33:23.162077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:23.163 Running I/O for 1 seconds... 
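Every stage in this log is wrapped by the run_test helper that emits the START TEST / END TEST banners and the real/user/sys timings. A simplified sketch of that pattern; the actual helper in SPDK's test/common/autotest_common.sh also handles xtrace toggling and failure propagation, so treat this as an approximation rather than the real implementation:

  # run_test <name> <command...>: banner, timed execution, closing banner
  run_test() {
      local name=$1; shift
      echo "************ START TEST $name ************"
      time "$@"
      local rc=$?
      echo "************ END TEST $name ************"
      return $rc
  }

It is invoked as in the trace above, e.g. run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf ... -w write_zeroes -t 1 ''.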
00:36:24.541 67200.00 IOPS, 262.50 MiB/s 00:36:24.541 Latency(us) 00:36:24.541 [2024-11-26T17:33:25.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:24.541 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:36:24.541 Nvme0n1 : 1.02 9561.36 37.35 0.00 0.00 13353.37 11317.46 34741.98 00:36:24.541 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:36:24.542 Nvme1n1p1 : 1.03 9550.51 37.31 0.00 0.00 13350.60 11370.10 35794.76 00:36:24.542 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:36:24.542 Nvme1n1p2 : 1.03 9540.34 37.27 0.00 0.00 13315.72 10948.99 33689.19 00:36:24.542 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:36:24.542 Nvme2n1 : 1.03 9531.90 37.23 0.00 0.00 13243.44 11317.46 28425.25 00:36:24.542 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:36:24.542 Nvme2n2 : 1.03 9523.43 37.20 0.00 0.00 13214.21 11317.46 26530.24 00:36:24.542 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:36:24.542 Nvme2n3 : 1.03 9570.67 37.39 0.00 0.00 13143.11 6843.12 24424.66 00:36:24.542 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:36:24.542 Nvme3n1 : 1.03 9561.79 37.35 0.00 0.00 13109.89 7001.03 22529.64 00:36:24.542 [2024-11-26T17:33:25.236Z] =================================================================================================================== 00:36:24.542 [2024-11-26T17:33:25.236Z] Total : 66840.01 261.09 0.00 0.00 13246.97 6843.12 35794.76 00:36:25.552 00:36:25.552 real 0m3.285s 00:36:25.552 user 0m2.905s 00:36:25.552 sys 0m0.266s 00:36:25.552 17:33:26 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:25.552 17:33:26 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:36:25.552 ************************************ 00:36:25.552 END TEST bdev_write_zeroes 00:36:25.552 ************************************ 00:36:25.552 17:33:26 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:25.552 17:33:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:36:25.552 17:33:26 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:25.552 17:33:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:36:25.552 ************************************ 00:36:25.552 START TEST bdev_json_nonenclosed 00:36:25.552 ************************************ 00:36:25.552 17:33:26 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:25.552 [2024-11-26 17:33:26.225174] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:36:25.552 [2024-11-26 17:33:26.225291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63417 ] 00:36:25.811 [2024-11-26 17:33:26.416042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:26.070 [2024-11-26 17:33:26.528124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:26.071 [2024-11-26 17:33:26.528228] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:36:26.071 [2024-11-26 17:33:26.528251] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:36:26.071 [2024-11-26 17:33:26.528263] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:26.330 00:36:26.330 real 0m0.653s 00:36:26.330 user 0m0.395s 00:36:26.330 sys 0m0.152s 00:36:26.330 17:33:26 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:26.330 17:33:26 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:36:26.330 ************************************ 00:36:26.330 END TEST bdev_json_nonenclosed 00:36:26.330 ************************************ 00:36:26.330 17:33:26 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:26.330 17:33:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:36:26.330 17:33:26 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:26.330 17:33:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:36:26.330 ************************************ 00:36:26.330 START TEST bdev_json_nonarray 00:36:26.330 ************************************ 00:36:26.330 17:33:26 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:26.330 [2024-11-26 17:33:26.950616] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:36:26.330 [2024-11-26 17:33:26.950736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63437 ] 00:36:26.590 [2024-11-26 17:33:27.131053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:26.590 [2024-11-26 17:33:27.245516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:26.590 [2024-11-26 17:33:27.245622] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:36:26.590 [2024-11-26 17:33:27.245644] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:36:26.590 [2024-11-26 17:33:27.245656] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:26.848 00:36:26.849 real 0m0.643s 00:36:26.849 user 0m0.401s 00:36:26.849 sys 0m0.137s 00:36:26.849 17:33:27 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:26.849 17:33:27 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:36:26.849 ************************************ 00:36:26.849 END TEST bdev_json_nonarray 00:36:26.849 ************************************ 00:36:27.108 17:33:27 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:36:27.108 17:33:27 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:36:27.108 17:33:27 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:36:27.108 17:33:27 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:27.108 17:33:27 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:27.108 17:33:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:36:27.108 ************************************ 00:36:27.108 START TEST bdev_gpt_uuid 00:36:27.108 ************************************ 00:36:27.108 17:33:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:36:27.108 17:33:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:36:27.108 17:33:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:36:27.108 17:33:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63468 00:36:27.108 17:33:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:36:27.108 17:33:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:36:27.108 17:33:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63468 00:36:27.108 17:33:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63468 ']' 00:36:27.108 17:33:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:27.108 17:33:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:27.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:27.108 17:33:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:27.108 17:33:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:27.108 17:33:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:36:27.108 [2024-11-26 17:33:27.681486] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:36:27.108 [2024-11-26 17:33:27.681620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63468 ] 00:36:27.367 [2024-11-26 17:33:27.852204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:27.367 [2024-11-26 17:33:27.967000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:28.305 17:33:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:28.305 17:33:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:36:28.305 17:33:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:36:28.305 17:33:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.305 17:33:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:36:28.565 Some configs were skipped because the RPC state that can call them passed over. 00:36:28.565 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.565 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:36:28.565 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.565 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:36:28.565 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.565 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:36:28.565 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.565 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:36:28.565 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.565 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:36:28.565 { 00:36:28.565 "name": "Nvme1n1p1", 00:36:28.565 "aliases": [ 00:36:28.565 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:36:28.565 ], 00:36:28.565 "product_name": "GPT Disk", 00:36:28.565 "block_size": 4096, 00:36:28.565 "num_blocks": 655104, 00:36:28.565 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:36:28.565 "assigned_rate_limits": { 00:36:28.565 "rw_ios_per_sec": 0, 00:36:28.565 "rw_mbytes_per_sec": 0, 00:36:28.565 "r_mbytes_per_sec": 0, 00:36:28.565 "w_mbytes_per_sec": 0 00:36:28.565 }, 00:36:28.565 "claimed": false, 00:36:28.565 "zoned": false, 00:36:28.565 "supported_io_types": { 00:36:28.565 "read": true, 00:36:28.565 "write": true, 00:36:28.565 "unmap": true, 00:36:28.565 "flush": true, 00:36:28.565 "reset": true, 00:36:28.565 "nvme_admin": false, 00:36:28.565 "nvme_io": false, 00:36:28.565 "nvme_io_md": false, 00:36:28.565 "write_zeroes": true, 00:36:28.565 "zcopy": false, 00:36:28.565 "get_zone_info": false, 00:36:28.565 "zone_management": false, 00:36:28.565 "zone_append": false, 00:36:28.565 "compare": true, 00:36:28.565 "compare_and_write": false, 00:36:28.565 "abort": true, 00:36:28.565 "seek_hole": false, 00:36:28.565 "seek_data": false, 00:36:28.565 "copy": true, 00:36:28.565 "nvme_iov_md": false 00:36:28.565 }, 00:36:28.565 "driver_specific": { 
00:36:28.565 "gpt": { 00:36:28.565 "base_bdev": "Nvme1n1", 00:36:28.565 "offset_blocks": 256, 00:36:28.565 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:36:28.565 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:36:28.565 "partition_name": "SPDK_TEST_first" 00:36:28.565 } 00:36:28.565 } 00:36:28.565 } 00:36:28.565 ]' 00:36:28.565 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:36:28.825 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:36:28.825 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:36:28.825 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:36:28.825 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:36:28.825 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:36:28.825 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:36:28.825 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.825 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:36:28.825 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.825 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:36:28.825 { 00:36:28.825 "name": "Nvme1n1p2", 00:36:28.825 "aliases": [ 00:36:28.825 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:36:28.825 ], 00:36:28.825 "product_name": "GPT Disk", 00:36:28.825 "block_size": 4096, 00:36:28.825 "num_blocks": 655103, 00:36:28.825 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:36:28.825 "assigned_rate_limits": { 00:36:28.825 "rw_ios_per_sec": 0, 00:36:28.825 "rw_mbytes_per_sec": 0, 00:36:28.825 "r_mbytes_per_sec": 0, 00:36:28.825 "w_mbytes_per_sec": 0 00:36:28.825 }, 00:36:28.825 "claimed": false, 00:36:28.825 "zoned": false, 00:36:28.825 "supported_io_types": { 00:36:28.825 "read": true, 00:36:28.825 "write": true, 00:36:28.825 "unmap": true, 00:36:28.825 "flush": true, 00:36:28.825 "reset": true, 00:36:28.825 "nvme_admin": false, 00:36:28.825 "nvme_io": false, 00:36:28.825 "nvme_io_md": false, 00:36:28.825 "write_zeroes": true, 00:36:28.825 "zcopy": false, 00:36:28.825 "get_zone_info": false, 00:36:28.825 "zone_management": false, 00:36:28.825 "zone_append": false, 00:36:28.825 "compare": true, 00:36:28.825 "compare_and_write": false, 00:36:28.825 "abort": true, 00:36:28.825 "seek_hole": false, 00:36:28.825 "seek_data": false, 00:36:28.825 "copy": true, 00:36:28.825 "nvme_iov_md": false 00:36:28.825 }, 00:36:28.825 "driver_specific": { 00:36:28.825 "gpt": { 00:36:28.825 "base_bdev": "Nvme1n1", 00:36:28.825 "offset_blocks": 655360, 00:36:28.825 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:36:28.825 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:36:28.825 "partition_name": "SPDK_TEST_second" 00:36:28.825 } 00:36:28.825 } 00:36:28.825 } 00:36:28.825 ]' 00:36:28.825 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:36:28.825 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:36:28.825 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:36:28.825 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:36:28.825 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:36:28.825 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:36:28.825 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63468 00:36:28.825 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63468 ']' 00:36:28.825 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63468 00:36:28.825 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:36:28.825 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:29.084 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63468 00:36:29.084 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:29.084 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:29.084 killing process with pid 63468 00:36:29.084 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63468' 00:36:29.084 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63468 00:36:29.084 17:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63468 00:36:31.660 00:36:31.660 real 0m4.371s 00:36:31.660 user 0m4.467s 00:36:31.660 sys 0m0.542s 00:36:31.660 17:33:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:31.660 17:33:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:36:31.660 ************************************ 00:36:31.660 END TEST bdev_gpt_uuid 00:36:31.660 ************************************ 00:36:31.660 17:33:32 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:36:31.660 17:33:32 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:36:31.660 17:33:32 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:36:31.660 17:33:32 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:36:31.660 17:33:32 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:36:31.660 17:33:32 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:36:31.660 17:33:32 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:36:31.660 17:33:32 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:36:31.660 17:33:32 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:31.920 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:32.179 Waiting for block devices as requested 00:36:32.179 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:32.438 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:36:32.438 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:36:32.697 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:36:37.975 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:36:37.975 17:33:38 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:36:37.975 17:33:38 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:36:37.975 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:36:37.975 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:36:37.975 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:36:37.975 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:36:37.975 17:33:38 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:36:37.975 00:36:37.975 real 1m5.591s 00:36:37.975 user 1m21.703s 00:36:37.975 sys 0m12.351s 00:36:37.975 17:33:38 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:37.975 ************************************ 00:36:37.975 END TEST blockdev_nvme_gpt 00:36:37.975 ************************************ 00:36:37.975 17:33:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:36:37.975 17:33:38 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:36:37.975 17:33:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:37.975 17:33:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:37.975 17:33:38 -- common/autotest_common.sh@10 -- # set +x 00:36:37.975 ************************************ 00:36:37.975 START TEST nvme 00:36:37.975 ************************************ 00:36:37.975 17:33:38 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:36:38.233 * Looking for test storage... 00:36:38.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:36:38.233 17:33:38 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:38.233 17:33:38 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:36:38.233 17:33:38 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:38.233 17:33:38 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:38.233 17:33:38 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:38.233 17:33:38 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:38.233 17:33:38 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:38.233 17:33:38 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:36:38.233 17:33:38 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:36:38.233 17:33:38 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:36:38.233 17:33:38 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:36:38.233 17:33:38 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:36:38.233 17:33:38 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:36:38.233 17:33:38 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:36:38.233 17:33:38 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:38.233 17:33:38 nvme -- scripts/common.sh@344 -- # case "$op" in 00:36:38.233 17:33:38 nvme -- scripts/common.sh@345 -- # : 1 00:36:38.233 17:33:38 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:38.233 17:33:38 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:38.233 17:33:38 nvme -- scripts/common.sh@365 -- # decimal 1 00:36:38.233 17:33:38 nvme -- scripts/common.sh@353 -- # local d=1 00:36:38.233 17:33:38 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:38.233 17:33:38 nvme -- scripts/common.sh@355 -- # echo 1 00:36:38.233 17:33:38 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:36:38.233 17:33:38 nvme -- scripts/common.sh@366 -- # decimal 2 00:36:38.234 17:33:38 nvme -- scripts/common.sh@353 -- # local d=2 00:36:38.234 17:33:38 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:38.234 17:33:38 nvme -- scripts/common.sh@355 -- # echo 2 00:36:38.234 17:33:38 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:36:38.234 17:33:38 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:38.234 17:33:38 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:38.234 17:33:38 nvme -- scripts/common.sh@368 -- # return 0 00:36:38.234 17:33:38 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:38.234 17:33:38 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:38.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:38.234 --rc genhtml_branch_coverage=1 00:36:38.234 --rc genhtml_function_coverage=1 00:36:38.234 --rc genhtml_legend=1 00:36:38.234 --rc geninfo_all_blocks=1 00:36:38.234 --rc geninfo_unexecuted_blocks=1 00:36:38.234 00:36:38.234 ' 00:36:38.234 17:33:38 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:38.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:38.234 --rc genhtml_branch_coverage=1 00:36:38.234 --rc genhtml_function_coverage=1 00:36:38.234 --rc genhtml_legend=1 00:36:38.234 --rc geninfo_all_blocks=1 00:36:38.234 --rc geninfo_unexecuted_blocks=1 00:36:38.234 00:36:38.234 ' 00:36:38.234 17:33:38 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:38.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:38.234 --rc genhtml_branch_coverage=1 00:36:38.234 --rc genhtml_function_coverage=1 00:36:38.234 --rc genhtml_legend=1 00:36:38.234 --rc geninfo_all_blocks=1 00:36:38.234 --rc geninfo_unexecuted_blocks=1 00:36:38.234 00:36:38.234 ' 00:36:38.234 17:33:38 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:38.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:38.234 --rc genhtml_branch_coverage=1 00:36:38.234 --rc genhtml_function_coverage=1 00:36:38.234 --rc genhtml_legend=1 00:36:38.234 --rc geninfo_all_blocks=1 00:36:38.234 --rc geninfo_unexecuted_blocks=1 00:36:38.234 00:36:38.234 ' 00:36:38.234 17:33:38 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:36:39.171 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:39.739 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:36:39.739 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:36:39.739 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:36:39.739 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:36:39.999 17:33:40 nvme -- nvme/nvme.sh@79 -- # uname 00:36:40.000 17:33:40 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:36:40.000 17:33:40 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:36:40.000 17:33:40 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:36:40.000 17:33:40 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:36:40.000 17:33:40 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:36:40.000 17:33:40 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:36:40.000 17:33:40 nvme -- common/autotest_common.sh@1075 -- # stubpid=64131 00:36:40.000 Waiting for stub to ready for secondary processes... 00:36:40.000 17:33:40 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:36:40.000 17:33:40 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:36:40.000 17:33:40 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64131 ]] 00:36:40.000 17:33:40 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:36:40.000 17:33:40 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:36:40.000 [2024-11-26 17:33:40.502662] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:36:40.000 [2024-11-26 17:33:40.502803] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:36:41.037 17:33:41 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:36:41.037 17:33:41 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64131 ]] 00:36:41.037 17:33:41 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:36:41.037 [2024-11-26 17:33:41.532359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:41.037 [2024-11-26 17:33:41.637592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:41.037 [2024-11-26 17:33:41.637736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:41.037 [2024-11-26 17:33:41.637768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:41.037 [2024-11-26 17:33:41.655610] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:36:41.037 [2024-11-26 17:33:41.655644] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:36:41.037 [2024-11-26 17:33:41.672575] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:36:41.037 [2024-11-26 17:33:41.672691] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:36:41.037 [2024-11-26 17:33:41.675479] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:36:41.037 [2024-11-26 17:33:41.675735] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:36:41.037 [2024-11-26 17:33:41.675835] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:36:41.037 [2024-11-26 17:33:41.680053] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:36:41.037 [2024-11-26 17:33:41.680293] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:36:41.037 [2024-11-26 17:33:41.680389] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:36:41.037 [2024-11-26 17:33:41.684254] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:36:41.037 [2024-11-26 17:33:41.684528] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:36:41.037 [2024-11-26 17:33:41.684610] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:36:41.037 [2024-11-26 17:33:41.684661] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:36:41.037 [2024-11-26 17:33:41.684712] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:36:41.974 17:33:42 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:36:41.974 done. 00:36:41.974 17:33:42 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:36:41.975 17:33:42 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:36:41.975 17:33:42 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:36:41.975 17:33:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:41.975 17:33:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:36:41.975 ************************************ 00:36:41.975 START TEST nvme_reset 00:36:41.975 ************************************ 00:36:41.975 17:33:42 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:36:42.234 Initializing NVMe Controllers 00:36:42.234 Skipping QEMU NVMe SSD at 0000:00:10.0 00:36:42.234 Skipping QEMU NVMe SSD at 0000:00:11.0 00:36:42.234 Skipping QEMU NVMe SSD at 0000:00:13.0 00:36:42.234 Skipping QEMU NVMe SSD at 0000:00:12.0 00:36:42.234 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:36:42.234 00:36:42.234 real 0m0.295s 00:36:42.234 user 0m0.103s 00:36:42.234 sys 0m0.147s 00:36:42.234 17:33:42 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:42.234 17:33:42 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:36:42.234 ************************************ 00:36:42.234 END TEST nvme_reset 00:36:42.234 ************************************ 00:36:42.234 17:33:42 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:36:42.234 17:33:42 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:42.234 17:33:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:42.234 17:33:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:36:42.234 ************************************ 00:36:42.234 START TEST nvme_identify 00:36:42.234 ************************************ 00:36:42.234 17:33:42 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:36:42.234 17:33:42 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:36:42.234 17:33:42 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:36:42.234 17:33:42 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:36:42.234 17:33:42 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:36:42.234 17:33:42 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:36:42.234 17:33:42 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:36:42.234 17:33:42 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:42.234 17:33:42 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:36:42.234 17:33:42 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:36:42.493 17:33:42 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:36:42.493 17:33:42 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:36:42.493 17:33:42 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:36:42.753 ===================================================== 00:36:42.753 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:36:42.753 ===================================================== 00:36:42.753 Controller Capabilities/Features 00:36:42.753 ================================ 00:36:42.753 Vendor ID: 1b36 00:36:42.753 Subsystem Vendor ID: 1af4 00:36:42.753 Serial Number: 12340 00:36:42.753 Model Number: QEMU NVMe Ctrl 00:36:42.753 Firmware Version: 8.0.0 00:36:42.753 Recommended Arb Burst: 6 00:36:42.753 IEEE OUI Identifier: 00 54 52 00:36:42.753 Multi-path I/O 00:36:42.753 May have multiple subsystem ports: No 00:36:42.753 May have multiple controllers: No 00:36:42.753 Associated with SR-IOV VF: No 00:36:42.753 Max Data Transfer Size: 524288 00:36:42.753 Max Number of Namespaces: 256 00:36:42.753 Max Number of I/O Queues: 64 00:36:42.753 NVMe Specification Version (VS): 1.4 00:36:42.754 NVMe Specification Version (Identify): 1.4 00:36:42.754 Maximum Queue Entries: 2048 00:36:42.754 Contiguous Queues Required: Yes 00:36:42.754 Arbitration Mechanisms Supported 00:36:42.754 Weighted Round Robin: Not Supported 00:36:42.754 Vendor Specific: Not Supported 00:36:42.754 Reset Timeout: 7500 ms 00:36:42.754 Doorbell Stride: 4 bytes 00:36:42.754 NVM Subsystem Reset: Not Supported 00:36:42.754 Command Sets Supported 00:36:42.754 NVM Command Set: Supported 00:36:42.754 Boot Partition: Not Supported 00:36:42.754 Memory Page Size Minimum: 4096 bytes 00:36:42.754 Memory Page Size Maximum: 65536 bytes 00:36:42.754 Persistent Memory Region: Not Supported 00:36:42.754 Optional Asynchronous Events Supported 00:36:42.754 Namespace Attribute Notices: Supported 00:36:42.754 Firmware Activation Notices: Not Supported 00:36:42.754 ANA Change Notices: Not Supported 00:36:42.754 PLE Aggregate Log Change Notices: Not Supported 00:36:42.754 LBA Status Info Alert Notices: Not Supported 00:36:42.754 EGE Aggregate Log Change Notices: Not Supported 00:36:42.754 Normal NVM Subsystem Shutdown event: Not Supported 00:36:42.754 Zone Descriptor Change Notices: Not Supported 00:36:42.754 Discovery Log Change Notices: Not Supported 00:36:42.754 Controller Attributes 00:36:42.754 128-bit Host Identifier: Not Supported 00:36:42.754 Non-Operational Permissive Mode: Not Supported 00:36:42.754 NVM Sets: Not Supported 00:36:42.754 Read Recovery Levels: Not Supported 00:36:42.754 Endurance Groups: Not Supported 00:36:42.754 Predictable Latency Mode: Not Supported 00:36:42.754 Traffic Based Keep ALive: Not Supported 00:36:42.754 Namespace Granularity: Not Supported 00:36:42.754 SQ Associations: Not Supported 00:36:42.754 UUID List: Not Supported 00:36:42.754 Multi-Domain Subsystem: Not Supported 00:36:42.754 Fixed Capacity Management: Not Supported 00:36:42.754 Variable Capacity Management: Not Supported 00:36:42.754 Delete Endurance Group: Not Supported 00:36:42.754 Delete NVM Set: Not Supported 00:36:42.754 Extended LBA Formats Supported: Supported 00:36:42.754 Flexible Data Placement Supported: Not Supported 00:36:42.754 00:36:42.754 Controller Memory Buffer Support 00:36:42.754 ================================ 00:36:42.754 Supported: No 00:36:42.754 00:36:42.754 Persistent Memory Region Support 00:36:42.754 ================================ 00:36:42.754 Supported: No 00:36:42.754 00:36:42.754 Admin 
Command Set Attributes 00:36:42.754 ============================ 00:36:42.754 Security Send/Receive: Not Supported 00:36:42.754 Format NVM: Supported 00:36:42.754 Firmware Activate/Download: Not Supported 00:36:42.754 Namespace Management: Supported 00:36:42.754 Device Self-Test: Not Supported 00:36:42.754 Directives: Supported 00:36:42.754 NVMe-MI: Not Supported 00:36:42.754 Virtualization Management: Not Supported 00:36:42.754 Doorbell Buffer Config: Supported 00:36:42.754 Get LBA Status Capability: Not Supported 00:36:42.754 Command & Feature Lockdown Capability: Not Supported 00:36:42.754 Abort Command Limit: 4 00:36:42.754 Async Event Request Limit: 4 00:36:42.754 Number of Firmware Slots: N/A 00:36:42.754 Firmware Slot 1 Read-Only: N/A 00:36:42.754 Firmware Activation Without Reset: N/A 00:36:42.754 Multiple Update Detection Support: N/A 00:36:42.754 [2024-11-26 17:33:43.196258] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64159 terminated unexpected 00:36:42.754 Firmware Update Granularity: No Information Provided 00:36:42.754 Per-Namespace SMART Log: Yes 00:36:42.754 Asymmetric Namespace Access Log Page: Not Supported 00:36:42.754 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:36:42.754 Command Effects Log Page: Supported 00:36:42.754 Get Log Page Extended Data: Supported 00:36:42.754 Telemetry Log Pages: Not Supported 00:36:42.754 Persistent Event Log Pages: Not Supported 00:36:42.754 Supported Log Pages Log Page: May Support 00:36:42.754 Commands Supported & Effects Log Page: Not Supported 00:36:42.754 Feature Identifiers & Effects Log Page:May Support 00:36:42.754 NVMe-MI Commands & Effects Log Page: May Support 00:36:42.754 Data Area 4 for Telemetry Log: Not Supported 00:36:42.754 Error Log Page Entries Supported: 1 00:36:42.754 Keep Alive: Not Supported 00:36:42.754 00:36:42.754 NVM Command Set Attributes 00:36:42.754 ========================== 00:36:42.754 Submission Queue Entry Size 00:36:42.754 Max: 64 00:36:42.754 Min: 64 00:36:42.754 Completion Queue Entry Size 00:36:42.754 Max: 16 00:36:42.754 Min: 16 00:36:42.754 Number of Namespaces: 256 00:36:42.754 Compare Command: Supported 00:36:42.754 Write Uncorrectable Command: Not Supported 00:36:42.754 Dataset Management Command: Supported 00:36:42.754 Write Zeroes Command: Supported 00:36:42.754 Set Features Save Field: Supported 00:36:42.754 Reservations: Not Supported 00:36:42.754 Timestamp: Supported 00:36:42.754 Copy: Supported 00:36:42.754 Volatile Write Cache: Present 00:36:42.754 Atomic Write Unit (Normal): 1 00:36:42.754 Atomic Write Unit (PFail): 1 00:36:42.754 Atomic Compare & Write Unit: 1 00:36:42.754 Fused Compare & Write: Not Supported 00:36:42.754 Scatter-Gather List 00:36:42.754 SGL Command Set: Supported 00:36:42.754 SGL Keyed: Not Supported 00:36:42.754 SGL Bit Bucket Descriptor: Not Supported 00:36:42.754 SGL Metadata Pointer: Not Supported 00:36:42.754 Oversized SGL: Not Supported 00:36:42.754 SGL Metadata Address: Not Supported 00:36:42.754 SGL Offset: Not Supported 00:36:42.754 Transport SGL Data Block: Not Supported 00:36:42.754 Replay Protected Memory Block: Not Supported 00:36:42.754 00:36:42.754 Firmware Slot Information 00:36:42.754 ========================= 00:36:42.754 Active slot: 1 00:36:42.754 Slot 1 Firmware Revision: 1.0 00:36:42.754 00:36:42.754 00:36:42.754 Commands Supported and Effects 00:36:42.754 ============================== 00:36:42.754 Admin Commands 00:36:42.754 -------------- 00:36:42.754 Delete I/O Submission Queue (00h): Supported
00:36:42.754 Create I/O Submission Queue (01h): Supported 00:36:42.754 Get Log Page (02h): Supported 00:36:42.754 Delete I/O Completion Queue (04h): Supported 00:36:42.754 Create I/O Completion Queue (05h): Supported 00:36:42.754 Identify (06h): Supported 00:36:42.754 Abort (08h): Supported 00:36:42.754 Set Features (09h): Supported 00:36:42.754 Get Features (0Ah): Supported 00:36:42.754 Asynchronous Event Request (0Ch): Supported 00:36:42.754 Namespace Attachment (15h): Supported NS-Inventory-Change 00:36:42.754 Directive Send (19h): Supported 00:36:42.754 Directive Receive (1Ah): Supported 00:36:42.754 Virtualization Management (1Ch): Supported 00:36:42.754 Doorbell Buffer Config (7Ch): Supported 00:36:42.754 Format NVM (80h): Supported LBA-Change 00:36:42.754 I/O Commands 00:36:42.754 ------------ 00:36:42.754 Flush (00h): Supported LBA-Change 00:36:42.754 Write (01h): Supported LBA-Change 00:36:42.754 Read (02h): Supported 00:36:42.754 Compare (05h): Supported 00:36:42.754 Write Zeroes (08h): Supported LBA-Change 00:36:42.754 Dataset Management (09h): Supported LBA-Change 00:36:42.754 Unknown (0Ch): Supported 00:36:42.754 Unknown (12h): Supported 00:36:42.754 Copy (19h): Supported LBA-Change 00:36:42.754 Unknown (1Dh): Supported LBA-Change 00:36:42.754 00:36:42.754 Error Log 00:36:42.754 ========= 00:36:42.754 00:36:42.754 Arbitration 00:36:42.754 =========== 00:36:42.754 Arbitration Burst: no limit 00:36:42.754 00:36:42.754 Power Management 00:36:42.754 ================ 00:36:42.754 Number of Power States: 1 00:36:42.754 Current Power State: Power State #0 00:36:42.754 Power State #0: 00:36:42.754 Max Power: 25.00 W 00:36:42.754 Non-Operational State: Operational 00:36:42.754 Entry Latency: 16 microseconds 00:36:42.754 Exit Latency: 4 microseconds 00:36:42.754 Relative Read Throughput: 0 00:36:42.754 Relative Read Latency: 0 00:36:42.754 Relative Write Throughput: 0 00:36:42.754 Relative Write Latency: 0 00:36:42.754 [2024-11-26 17:33:43.197424] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64159 terminated unexpected 00:36:42.754 Idle Power: Not Reported 00:36:42.754 Active Power: Not Reported 00:36:42.754 Non-Operational Permissive Mode: Not Supported 00:36:42.754 00:36:42.754 Health Information 00:36:42.754 ================== 00:36:42.754 Critical Warnings: 00:36:42.754 Available Spare Space: OK 00:36:42.754 Temperature: OK 00:36:42.754 Device Reliability: OK 00:36:42.754 Read Only: No 00:36:42.754 Volatile Memory Backup: OK 00:36:42.754 Current Temperature: 323 Kelvin (50 Celsius) 00:36:42.754 Temperature Threshold: 343 Kelvin (70 Celsius) 00:36:42.754 Available Spare: 0% 00:36:42.754 Available Spare Threshold: 0% 00:36:42.754 Life Percentage Used: 0% 00:36:42.754 Data Units Read: 753 00:36:42.754 Data Units Written: 682 00:36:42.754 Host Read Commands: 36685 00:36:42.754 Host Write Commands: 36471 00:36:42.754 Controller Busy Time: 0 minutes 00:36:42.754 Power Cycles: 0 00:36:42.754 Power On Hours: 0 hours 00:36:42.754 Unsafe Shutdowns: 0 00:36:42.754 Unrecoverable Media Errors: 0 00:36:42.754 Lifetime Error Log Entries: 0 00:36:42.754 Warning Temperature Time: 0 minutes 00:36:42.754 Critical Temperature Time: 0 minutes 00:36:42.754 00:36:42.754 Number of Queues 00:36:42.754 ================ 00:36:42.754 Number of I/O Submission Queues: 64 00:36:42.754 Number of I/O Completion Queues: 64 00:36:42.754 00:36:42.754 ZNS Specific Controller Data 00:36:42.754 ============================ 00:36:42.754 Zone Append Size Limit: 0 00:36:42.755
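The per-controller dumps above and below were driven by the nvme_identify helper traced at the start of this test: gen_nvme.sh emits a bdev_nvme config entry for every attached device, jq extracts each PCI address, and a single spdk_nvme_identify run then reports all controllers. A minimal sketch of that flow under the same paths as this run (the variable names here are illustrative, not the helper's own):

  rootdir=/home/vagrant/spdk_repo/spdk
  # Enumerate NVMe PCI addresses (traddr) from the generated bdev config.
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  # Bail out if enumeration found nothing, mirroring the (( 4 == 0 )) guard in the trace.
  (( ${#bdfs[@]} > 0 )) || exit 1
  printf '%s\n' "${bdfs[@]}"   # 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
  # One invocation dumps every attached controller, as seen in this log.
  "$rootdir/build/bin/spdk_nvme_identify" -i 0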
00:36:42.755 00:36:42.755 Active Namespaces 00:36:42.755 ================= 00:36:42.755 Namespace ID:1 00:36:42.755 Error Recovery Timeout: Unlimited 00:36:42.755 Command Set Identifier: NVM (00h) 00:36:42.755 Deallocate: Supported 00:36:42.755 Deallocated/Unwritten Error: Supported 00:36:42.755 Deallocated Read Value: All 0x00 00:36:42.755 Deallocate in Write Zeroes: Not Supported 00:36:42.755 Deallocated Guard Field: 0xFFFF 00:36:42.755 Flush: Supported 00:36:42.755 Reservation: Not Supported 00:36:42.755 Metadata Transferred as: Separate Metadata Buffer 00:36:42.755 Namespace Sharing Capabilities: Private 00:36:42.755 Size (in LBAs): 1548666 (5GiB) 00:36:42.755 Capacity (in LBAs): 1548666 (5GiB) 00:36:42.755 Utilization (in LBAs): 1548666 (5GiB) 00:36:42.755 Thin Provisioning: Not Supported 00:36:42.755 Per-NS Atomic Units: No 00:36:42.755 Maximum Single Source Range Length: 128 00:36:42.755 Maximum Copy Length: 128 00:36:42.755 Maximum Source Range Count: 128 00:36:42.755 NGUID/EUI64 Never Reused: No 00:36:42.755 Namespace Write Protected: No 00:36:42.755 Number of LBA Formats: 8 00:36:42.755 Current LBA Format: LBA Format #07 00:36:42.755 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:42.755 LBA Format #01: Data Size: 512 Metadata Size: 8 00:36:42.755 LBA Format #02: Data Size: 512 Metadata Size: 16 00:36:42.755 LBA Format #03: Data Size: 512 Metadata Size: 64 00:36:42.755 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:36:42.755 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:36:42.755 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:36:42.755 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:36:42.755 00:36:42.755 NVM Specific Namespace Data 00:36:42.755 =========================== 00:36:42.755 Logical Block Storage Tag Mask: 0 00:36:42.755 Protection Information Capabilities: 00:36:42.755 16b Guard Protection Information Storage Tag Support: No 00:36:42.755 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:36:42.755 Storage Tag Check Read Support: No 00:36:42.755 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.755 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.755 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.755 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.755 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.755 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.755 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.755 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.755 ===================================================== 00:36:42.755 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:36:42.755 ===================================================== 00:36:42.755 Controller Capabilities/Features 00:36:42.755 ================================ 00:36:42.755 Vendor ID: 1b36 00:36:42.755 Subsystem Vendor ID: 1af4 00:36:42.755 Serial Number: 12341 00:36:42.755 Model Number: QEMU NVMe Ctrl 00:36:42.755 Firmware Version: 8.0.0 00:36:42.755 Recommended Arb Burst: 6 00:36:42.755 IEEE OUI Identifier: 00 54 52 00:36:42.755 Multi-path I/O 00:36:42.755 May have multiple subsystem ports: No 00:36:42.755 May have multiple controllers: No 
00:36:42.755 Associated with SR-IOV VF: No 00:36:42.755 Max Data Transfer Size: 524288 00:36:42.755 Max Number of Namespaces: 256 00:36:42.755 Max Number of I/O Queues: 64 00:36:42.755 NVMe Specification Version (VS): 1.4 00:36:42.755 NVMe Specification Version (Identify): 1.4 00:36:42.755 Maximum Queue Entries: 2048 00:36:42.755 Contiguous Queues Required: Yes 00:36:42.755 Arbitration Mechanisms Supported 00:36:42.755 Weighted Round Robin: Not Supported 00:36:42.755 Vendor Specific: Not Supported 00:36:42.755 Reset Timeout: 7500 ms 00:36:42.755 Doorbell Stride: 4 bytes 00:36:42.755 NVM Subsystem Reset: Not Supported 00:36:42.755 Command Sets Supported 00:36:42.755 NVM Command Set: Supported 00:36:42.755 Boot Partition: Not Supported 00:36:42.755 Memory Page Size Minimum: 4096 bytes 00:36:42.755 Memory Page Size Maximum: 65536 bytes 00:36:42.755 Persistent Memory Region: Not Supported 00:36:42.755 Optional Asynchronous Events Supported 00:36:42.755 Namespace Attribute Notices: Supported 00:36:42.755 Firmware Activation Notices: Not Supported 00:36:42.755 ANA Change Notices: Not Supported 00:36:42.755 PLE Aggregate Log Change Notices: Not Supported 00:36:42.755 LBA Status Info Alert Notices: Not Supported 00:36:42.755 EGE Aggregate Log Change Notices: Not Supported 00:36:42.755 Normal NVM Subsystem Shutdown event: Not Supported 00:36:42.755 Zone Descriptor Change Notices: Not Supported 00:36:42.755 Discovery Log Change Notices: Not Supported 00:36:42.755 Controller Attributes 00:36:42.755 128-bit Host Identifier: Not Supported 00:36:42.755 Non-Operational Permissive Mode: Not Supported 00:36:42.755 NVM Sets: Not Supported 00:36:42.755 Read Recovery Levels: Not Supported 00:36:42.755 Endurance Groups: Not Supported 00:36:42.755 Predictable Latency Mode: Not Supported 00:36:42.755 Traffic Based Keep ALive: Not Supported 00:36:42.755 Namespace Granularity: Not Supported 00:36:42.755 SQ Associations: Not Supported 00:36:42.755 UUID List: Not Supported 00:36:42.755 Multi-Domain Subsystem: Not Supported 00:36:42.755 Fixed Capacity Management: Not Supported 00:36:42.755 Variable Capacity Management: Not Supported 00:36:42.755 Delete Endurance Group: Not Supported 00:36:42.755 Delete NVM Set: Not Supported 00:36:42.755 Extended LBA Formats Supported: Supported 00:36:42.755 Flexible Data Placement Supported: Not Supported 00:36:42.755 00:36:42.755 Controller Memory Buffer Support 00:36:42.755 ================================ 00:36:42.755 Supported: No 00:36:42.755 00:36:42.755 Persistent Memory Region Support 00:36:42.755 ================================ 00:36:42.755 Supported: No 00:36:42.755 00:36:42.755 Admin Command Set Attributes 00:36:42.755 ============================ 00:36:42.755 Security Send/Receive: Not Supported 00:36:42.755 Format NVM: Supported 00:36:42.755 Firmware Activate/Download: Not Supported 00:36:42.755 Namespace Management: Supported 00:36:42.755 Device Self-Test: Not Supported 00:36:42.755 Directives: Supported 00:36:42.755 NVMe-MI: Not Supported 00:36:42.755 Virtualization Management: Not Supported 00:36:42.755 Doorbell Buffer Config: Supported 00:36:42.755 Get LBA Status Capability: Not Supported 00:36:42.755 Command & Feature Lockdown Capability: Not Supported 00:36:42.755 Abort Command Limit: 4 00:36:42.755 Async Event Request Limit: 4 00:36:42.755 Number of Firmware Slots: N/A 00:36:42.755 Firmware Slot 1 Read-Only: N/A 00:36:42.755 Firmware Activation Without Reset: N/A 00:36:42.755 Multiple Update Detection Support: N/A 00:36:42.755 Firmware Update Granularity: No 
Information Provided 00:36:42.755 Per-Namespace SMART Log: Yes 00:36:42.755 Asymmetric Namespace Access Log Page: Not Supported 00:36:42.755 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:36:42.755 Command Effects Log Page: Supported 00:36:42.755 Get Log Page Extended Data: Supported 00:36:42.755 Telemetry Log Pages: Not Supported 00:36:42.755 Persistent Event Log Pages: Not Supported 00:36:42.755 Supported Log Pages Log Page: May Support 00:36:42.755 Commands Supported & Effects Log Page: Not Supported 00:36:42.755 Feature Identifiers & Effects Log Page:May Support 00:36:42.755 NVMe-MI Commands & Effects Log Page: May Support 00:36:42.755 Data Area 4 for Telemetry Log: Not Supported 00:36:42.755 Error Log Page Entries Supported: 1 00:36:42.755 Keep Alive: Not Supported 00:36:42.755 00:36:42.755 NVM Command Set Attributes 00:36:42.755 ========================== 00:36:42.755 Submission Queue Entry Size 00:36:42.755 Max: 64 00:36:42.755 Min: 64 00:36:42.755 Completion Queue Entry Size 00:36:42.755 Max: 16 00:36:42.755 Min: 16 00:36:42.755 Number of Namespaces: 256 00:36:42.755 Compare Command: Supported 00:36:42.755 Write Uncorrectable Command: Not Supported 00:36:42.755 Dataset Management Command: Supported 00:36:42.755 Write Zeroes Command: Supported 00:36:42.755 Set Features Save Field: Supported 00:36:42.755 Reservations: Not Supported 00:36:42.756 Timestamp: Supported 00:36:42.756 Copy: Supported 00:36:42.756 Volatile Write Cache: Present 00:36:42.756 Atomic Write Unit (Normal): 1 00:36:42.756 Atomic Write Unit (PFail): 1 00:36:42.756 Atomic Compare & Write Unit: 1 00:36:42.756 Fused Compare & Write: Not Supported 00:36:42.756 Scatter-Gather List 00:36:42.756 SGL Command Set: Supported 00:36:42.756 SGL Keyed: Not Supported 00:36:42.756 SGL Bit Bucket Descriptor: Not Supported 00:36:42.756 SGL Metadata Pointer: Not Supported 00:36:42.756 Oversized SGL: Not Supported 00:36:42.756 SGL Metadata Address: Not Supported 00:36:42.756 SGL Offset: Not Supported 00:36:42.756 Transport SGL Data Block: Not Supported 00:36:42.756 Replay Protected Memory Block: Not Supported 00:36:42.756 00:36:42.756 Firmware Slot Information 00:36:42.756 ========================= 00:36:42.756 Active slot: 1 00:36:42.756 Slot 1 Firmware Revision: 1.0 00:36:42.756 00:36:42.756 00:36:42.756 Commands Supported and Effects 00:36:42.756 ============================== 00:36:42.756 Admin Commands 00:36:42.756 -------------- 00:36:42.756 Delete I/O Submission Queue (00h): Supported 00:36:42.756 Create I/O Submission Queue (01h): Supported 00:36:42.756 Get Log Page (02h): Supported 00:36:42.756 Delete I/O Completion Queue (04h): Supported 00:36:42.756 Create I/O Completion Queue (05h): Supported 00:36:42.756 Identify (06h): Supported 00:36:42.756 Abort (08h): Supported 00:36:42.756 Set Features (09h): Supported 00:36:42.756 Get Features (0Ah): Supported 00:36:42.756 Asynchronous Event Request (0Ch): Supported 00:36:42.756 Namespace Attachment (15h): Supported NS-Inventory-Change 00:36:42.756 Directive Send (19h): Supported 00:36:42.756 Directive Receive (1Ah): Supported 00:36:42.756 Virtualization Management (1Ch): Supported 00:36:42.756 Doorbell Buffer Config (7Ch): Supported 00:36:42.756 Format NVM (80h): Supported LBA-Change 00:36:42.756 I/O Commands 00:36:42.756 ------------ 00:36:42.756 Flush (00h): Supported LBA-Change 00:36:42.756 Write (01h): Supported LBA-Change 00:36:42.756 Read (02h): Supported 00:36:42.756 Compare (05h): Supported 00:36:42.756 Write Zeroes (08h): Supported LBA-Change 00:36:42.756 Dataset Management 
(09h): Supported LBA-Change 00:36:42.756 Unknown (0Ch): Supported 00:36:42.756 Unknown (12h): Supported 00:36:42.756 Copy (19h): Supported LBA-Change 00:36:42.756 Unknown (1Dh): Supported LBA-Change 00:36:42.756 00:36:42.756 Error Log 00:36:42.756 ========= 00:36:42.756 00:36:42.756 Arbitration 00:36:42.756 =========== 00:36:42.756 Arbitration Burst: no limit 00:36:42.756 00:36:42.756 Power Management 00:36:42.756 ================ 00:36:42.756 Number of Power States: 1 00:36:42.756 Current Power State: Power State #0 00:36:42.756 Power State #0: 00:36:42.756 Max Power: 25.00 W 00:36:42.756 Non-Operational State: Operational 00:36:42.756 Entry Latency: 16 microseconds 00:36:42.756 Exit Latency: 4 microseconds 00:36:42.756 Relative Read Throughput: 0 00:36:42.756 Relative Read Latency: 0 00:36:42.756 Relative Write Throughput: 0 00:36:42.756 Relative Write Latency: 0 00:36:42.756 Idle Power: Not Reported 00:36:42.756 Active Power: Not Reported 00:36:42.756 Non-Operational Permissive Mode: Not Supported 00:36:42.756 00:36:42.756 Health Information 00:36:42.756 ================== 00:36:42.756 Critical Warnings: 00:36:42.756 Available Spare Space: OK 00:36:42.756 [2024-11-26 17:33:43.198360] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64159 terminated unexpected 00:36:42.756 Temperature: OK 00:36:42.756 Device Reliability: OK 00:36:42.756 Read Only: No 00:36:42.756 Volatile Memory Backup: OK 00:36:42.756 Current Temperature: 323 Kelvin (50 Celsius) 00:36:42.756 Temperature Threshold: 343 Kelvin (70 Celsius) 00:36:42.756 Available Spare: 0% 00:36:42.756 Available Spare Threshold: 0% 00:36:42.756 Life Percentage Used: 0% 00:36:42.756 Data Units Read: 1189 00:36:42.756 Data Units Written: 1055 00:36:42.756 Host Read Commands: 54955 00:36:42.756 Host Write Commands: 53734 00:36:42.756 Controller Busy Time: 0 minutes 00:36:42.756 Power Cycles: 0 00:36:42.756 Power On Hours: 0 hours 00:36:42.756 Unsafe Shutdowns: 0 00:36:42.756 Unrecoverable Media Errors: 0 00:36:42.756 Lifetime Error Log Entries: 0 00:36:42.756 Warning Temperature Time: 0 minutes 00:36:42.756 Critical Temperature Time: 0 minutes 00:36:42.756 00:36:42.756 Number of Queues 00:36:42.756 ================ 00:36:42.756 Number of I/O Submission Queues: 64 00:36:42.756 Number of I/O Completion Queues: 64 00:36:42.756 00:36:42.756 ZNS Specific Controller Data 00:36:42.756 ============================ 00:36:42.756 Zone Append Size Limit: 0 00:36:42.756 00:36:42.756 00:36:42.756 Active Namespaces 00:36:42.756 ================= 00:36:42.756 Namespace ID:1 00:36:42.756 Error Recovery Timeout: Unlimited 00:36:42.756 Command Set Identifier: NVM (00h) 00:36:42.756 Deallocate: Supported 00:36:42.756 Deallocated/Unwritten Error: Supported 00:36:42.756 Deallocated Read Value: All 0x00 00:36:42.756 Deallocate in Write Zeroes: Not Supported 00:36:42.756 Deallocated Guard Field: 0xFFFF 00:36:42.756 Flush: Supported 00:36:42.756 Reservation: Not Supported 00:36:42.756 Namespace Sharing Capabilities: Private 00:36:42.756 Size (in LBAs): 1310720 (5GiB) 00:36:42.756 Capacity (in LBAs): 1310720 (5GiB) 00:36:42.756 Utilization (in LBAs): 1310720 (5GiB) 00:36:42.756 Thin Provisioning: Not Supported 00:36:42.756 Per-NS Atomic Units: No 00:36:42.756 Maximum Single Source Range Length: 128 00:36:42.756 Maximum Copy Length: 128 00:36:42.756 Maximum Source Range Count: 128 00:36:42.756 NGUID/EUI64 Never Reused: No 00:36:42.756 Namespace Write Protected: No 00:36:42.756 Number of LBA Formats: 8 00:36:42.756 Current LBA
Format: LBA Format #04 00:36:42.756 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:42.756 LBA Format #01: Data Size: 512 Metadata Size: 8 00:36:42.756 LBA Format #02: Data Size: 512 Metadata Size: 16 00:36:42.756 LBA Format #03: Data Size: 512 Metadata Size: 64 00:36:42.756 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:36:42.756 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:36:42.756 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:36:42.756 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:36:42.756 00:36:42.756 NVM Specific Namespace Data 00:36:42.756 =========================== 00:36:42.756 Logical Block Storage Tag Mask: 0 00:36:42.756 Protection Information Capabilities: 00:36:42.756 16b Guard Protection Information Storage Tag Support: No 00:36:42.756 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:36:42.756 Storage Tag Check Read Support: No 00:36:42.756 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.756 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.756 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.756 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.756 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.756 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.756 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.756 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.756 ===================================================== 00:36:42.756 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:36:42.756 ===================================================== 00:36:42.756 Controller Capabilities/Features 00:36:42.756 ================================ 00:36:42.756 Vendor ID: 1b36 00:36:42.756 Subsystem Vendor ID: 1af4 00:36:42.756 Serial Number: 12343 00:36:42.756 Model Number: QEMU NVMe Ctrl 00:36:42.756 Firmware Version: 8.0.0 00:36:42.756 Recommended Arb Burst: 6 00:36:42.756 IEEE OUI Identifier: 00 54 52 00:36:42.756 Multi-path I/O 00:36:42.756 May have multiple subsystem ports: No 00:36:42.756 May have multiple controllers: Yes 00:36:42.756 Associated with SR-IOV VF: No 00:36:42.756 Max Data Transfer Size: 524288 00:36:42.756 Max Number of Namespaces: 256 00:36:42.756 Max Number of I/O Queues: 64 00:36:42.756 NVMe Specification Version (VS): 1.4 00:36:42.756 NVMe Specification Version (Identify): 1.4 00:36:42.756 Maximum Queue Entries: 2048 00:36:42.756 Contiguous Queues Required: Yes 00:36:42.756 Arbitration Mechanisms Supported 00:36:42.756 Weighted Round Robin: Not Supported 00:36:42.756 Vendor Specific: Not Supported 00:36:42.756 Reset Timeout: 7500 ms 00:36:42.756 Doorbell Stride: 4 bytes 00:36:42.756 NVM Subsystem Reset: Not Supported 00:36:42.756 Command Sets Supported 00:36:42.756 NVM Command Set: Supported 00:36:42.756 Boot Partition: Not Supported 00:36:42.756 Memory Page Size Minimum: 4096 bytes 00:36:42.756 Memory Page Size Maximum: 65536 bytes 00:36:42.756 Persistent Memory Region: Not Supported 00:36:42.756 Optional Asynchronous Events Supported 00:36:42.757 Namespace Attribute Notices: Supported 00:36:42.757 Firmware Activation Notices: Not Supported 00:36:42.757 ANA Change Notices: Not Supported 00:36:42.757 PLE Aggregate 
Log Change Notices: Not Supported 00:36:42.757 LBA Status Info Alert Notices: Not Supported 00:36:42.757 EGE Aggregate Log Change Notices: Not Supported 00:36:42.757 Normal NVM Subsystem Shutdown event: Not Supported 00:36:42.757 Zone Descriptor Change Notices: Not Supported 00:36:42.757 Discovery Log Change Notices: Not Supported 00:36:42.757 Controller Attributes 00:36:42.757 128-bit Host Identifier: Not Supported 00:36:42.757 Non-Operational Permissive Mode: Not Supported 00:36:42.757 NVM Sets: Not Supported 00:36:42.757 Read Recovery Levels: Not Supported 00:36:42.757 Endurance Groups: Supported 00:36:42.757 Predictable Latency Mode: Not Supported 00:36:42.757 Traffic Based Keep ALive: Not Supported 00:36:42.757 Namespace Granularity: Not Supported 00:36:42.757 SQ Associations: Not Supported 00:36:42.757 UUID List: Not Supported 00:36:42.757 Multi-Domain Subsystem: Not Supported 00:36:42.757 Fixed Capacity Management: Not Supported 00:36:42.757 Variable Capacity Management: Not Supported 00:36:42.757 Delete Endurance Group: Not Supported 00:36:42.757 Delete NVM Set: Not Supported 00:36:42.757 Extended LBA Formats Supported: Supported 00:36:42.757 Flexible Data Placement Supported: Supported 00:36:42.757 00:36:42.757 Controller Memory Buffer Support 00:36:42.757 ================================ 00:36:42.757 Supported: No 00:36:42.757 00:36:42.757 Persistent Memory Region Support 00:36:42.757 ================================ 00:36:42.757 Supported: No 00:36:42.757 00:36:42.757 Admin Command Set Attributes 00:36:42.757 ============================ 00:36:42.757 Security Send/Receive: Not Supported 00:36:42.757 Format NVM: Supported 00:36:42.757 Firmware Activate/Download: Not Supported 00:36:42.757 Namespace Management: Supported 00:36:42.757 Device Self-Test: Not Supported 00:36:42.757 Directives: Supported 00:36:42.757 NVMe-MI: Not Supported 00:36:42.757 Virtualization Management: Not Supported 00:36:42.757 Doorbell Buffer Config: Supported 00:36:42.757 Get LBA Status Capability: Not Supported 00:36:42.757 Command & Feature Lockdown Capability: Not Supported 00:36:42.757 Abort Command Limit: 4 00:36:42.757 Async Event Request Limit: 4 00:36:42.757 Number of Firmware Slots: N/A 00:36:42.757 Firmware Slot 1 Read-Only: N/A 00:36:42.757 Firmware Activation Without Reset: N/A 00:36:42.757 Multiple Update Detection Support: N/A 00:36:42.757 Firmware Update Granularity: No Information Provided 00:36:42.757 Per-Namespace SMART Log: Yes 00:36:42.757 Asymmetric Namespace Access Log Page: Not Supported 00:36:42.757 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:36:42.757 Command Effects Log Page: Supported 00:36:42.757 Get Log Page Extended Data: Supported 00:36:42.757 Telemetry Log Pages: Not Supported 00:36:42.757 Persistent Event Log Pages: Not Supported 00:36:42.757 Supported Log Pages Log Page: May Support 00:36:42.757 Commands Supported & Effects Log Page: Not Supported 00:36:42.757 Feature Identifiers & Effects Log Page:May Support 00:36:42.757 NVMe-MI Commands & Effects Log Page: May Support 00:36:42.757 Data Area 4 for Telemetry Log: Not Supported 00:36:42.757 Error Log Page Entries Supported: 1 00:36:42.757 Keep Alive: Not Supported 00:36:42.757 00:36:42.757 NVM Command Set Attributes 00:36:42.757 ========================== 00:36:42.757 Submission Queue Entry Size 00:36:42.757 Max: 64 00:36:42.757 Min: 64 00:36:42.757 Completion Queue Entry Size 00:36:42.757 Max: 16 00:36:42.757 Min: 16 00:36:42.757 Number of Namespaces: 256 00:36:42.757 Compare Command: Supported 00:36:42.757 Write 
Uncorrectable Command: Not Supported 00:36:42.757 Dataset Management Command: Supported 00:36:42.757 Write Zeroes Command: Supported 00:36:42.757 Set Features Save Field: Supported 00:36:42.757 Reservations: Not Supported 00:36:42.757 Timestamp: Supported 00:36:42.757 Copy: Supported 00:36:42.757 Volatile Write Cache: Present 00:36:42.757 Atomic Write Unit (Normal): 1 00:36:42.757 Atomic Write Unit (PFail): 1 00:36:42.757 Atomic Compare & Write Unit: 1 00:36:42.757 Fused Compare & Write: Not Supported 00:36:42.757 Scatter-Gather List 00:36:42.757 SGL Command Set: Supported 00:36:42.757 SGL Keyed: Not Supported 00:36:42.757 SGL Bit Bucket Descriptor: Not Supported 00:36:42.757 SGL Metadata Pointer: Not Supported 00:36:42.757 Oversized SGL: Not Supported 00:36:42.757 SGL Metadata Address: Not Supported 00:36:42.757 SGL Offset: Not Supported 00:36:42.757 Transport SGL Data Block: Not Supported 00:36:42.757 Replay Protected Memory Block: Not Supported 00:36:42.757 00:36:42.757 Firmware Slot Information 00:36:42.757 ========================= 00:36:42.757 Active slot: 1 00:36:42.757 Slot 1 Firmware Revision: 1.0 00:36:42.757 00:36:42.757 00:36:42.757 Commands Supported and Effects 00:36:42.757 ============================== 00:36:42.757 Admin Commands 00:36:42.757 -------------- 00:36:42.757 Delete I/O Submission Queue (00h): Supported 00:36:42.757 Create I/O Submission Queue (01h): Supported 00:36:42.757 Get Log Page (02h): Supported 00:36:42.757 Delete I/O Completion Queue (04h): Supported 00:36:42.757 Create I/O Completion Queue (05h): Supported 00:36:42.757 Identify (06h): Supported 00:36:42.757 Abort (08h): Supported 00:36:42.757 Set Features (09h): Supported 00:36:42.757 Get Features (0Ah): Supported 00:36:42.757 Asynchronous Event Request (0Ch): Supported 00:36:42.757 Namespace Attachment (15h): Supported NS-Inventory-Change 00:36:42.757 Directive Send (19h): Supported 00:36:42.757 Directive Receive (1Ah): Supported 00:36:42.757 Virtualization Management (1Ch): Supported 00:36:42.757 Doorbell Buffer Config (7Ch): Supported 00:36:42.757 Format NVM (80h): Supported LBA-Change 00:36:42.757 I/O Commands 00:36:42.757 ------------ 00:36:42.757 Flush (00h): Supported LBA-Change 00:36:42.757 Write (01h): Supported LBA-Change 00:36:42.757 Read (02h): Supported 00:36:42.757 Compare (05h): Supported 00:36:42.757 Write Zeroes (08h): Supported LBA-Change 00:36:42.757 Dataset Management (09h): Supported LBA-Change 00:36:42.757 Unknown (0Ch): Supported 00:36:42.757 Unknown (12h): Supported 00:36:42.757 Copy (19h): Supported LBA-Change 00:36:42.757 Unknown (1Dh): Supported LBA-Change 00:36:42.757 00:36:42.757 Error Log 00:36:42.757 ========= 00:36:42.757 00:36:42.757 Arbitration 00:36:42.757 =========== 00:36:42.757 Arbitration Burst: no limit 00:36:42.757 00:36:42.757 Power Management 00:36:42.757 ================ 00:36:42.757 Number of Power States: 1 00:36:42.757 Current Power State: Power State #0 00:36:42.757 Power State #0: 00:36:42.757 Max Power: 25.00 W 00:36:42.757 Non-Operational State: Operational 00:36:42.757 Entry Latency: 16 microseconds 00:36:42.757 Exit Latency: 4 microseconds 00:36:42.757 Relative Read Throughput: 0 00:36:42.757 Relative Read Latency: 0 00:36:42.757 Relative Write Throughput: 0 00:36:42.757 Relative Write Latency: 0 00:36:42.757 Idle Power: Not Reported 00:36:42.757 Active Power: Not Reported 00:36:42.757 Non-Operational Permissive Mode: Not Supported 00:36:42.757 00:36:42.757 Health Information 00:36:42.757 ================== 00:36:42.757 Critical Warnings: 00:36:42.757 
Available Spare Space: OK 00:36:42.757 Temperature: OK 00:36:42.757 Device Reliability: OK 00:36:42.757 Read Only: No 00:36:42.757 Volatile Memory Backup: OK 00:36:42.757 Current Temperature: 323 Kelvin (50 Celsius) 00:36:42.757 Temperature Threshold: 343 Kelvin (70 Celsius) 00:36:42.757 Available Spare: 0% 00:36:42.757 Available Spare Threshold: 0% 00:36:42.757 Life Percentage Used: 0% 00:36:42.757 Data Units Read: 874 00:36:42.757 Data Units Written: 803 00:36:42.757 Host Read Commands: 37802 00:36:42.757 Host Write Commands: 37228 00:36:42.757 Controller Busy Time: 0 minutes 00:36:42.757 Power Cycles: 0 00:36:42.757 Power On Hours: 0 hours 00:36:42.757 Unsafe Shutdowns: 0 00:36:42.757 Unrecoverable Media Errors: 0 00:36:42.757 Lifetime Error Log Entries: 0 00:36:42.757 Warning Temperature Time: 0 minutes 00:36:42.757 Critical Temperature Time: 0 minutes 00:36:42.757 00:36:42.757 Number of Queues 00:36:42.757 ================ 00:36:42.757 Number of I/O Submission Queues: 64 00:36:42.757 Number of I/O Completion Queues: 64 00:36:42.757 00:36:42.757 ZNS Specific Controller Data 00:36:42.757 ============================ 00:36:42.757 Zone Append Size Limit: 0 00:36:42.757 00:36:42.757 00:36:42.757 Active Namespaces 00:36:42.757 ================= 00:36:42.757 Namespace ID:1 00:36:42.757 Error Recovery Timeout: Unlimited 00:36:42.757 Command Set Identifier: NVM (00h) 00:36:42.757 Deallocate: Supported 00:36:42.757 Deallocated/Unwritten Error: Supported 00:36:42.757 Deallocated Read Value: All 0x00 00:36:42.758 Deallocate in Write Zeroes: Not Supported 00:36:42.758 Deallocated Guard Field: 0xFFFF 00:36:42.758 Flush: Supported 00:36:42.758 Reservation: Not Supported 00:36:42.758 Namespace Sharing Capabilities: Multiple Controllers 00:36:42.758 Size (in LBAs): 262144 (1GiB) 00:36:42.758 Capacity (in LBAs): 262144 (1GiB) 00:36:42.758 Utilization (in LBAs): 262144 (1GiB) 00:36:42.758 Thin Provisioning: Not Supported 00:36:42.758 Per-NS Atomic Units: No 00:36:42.758 Maximum Single Source Range Length: 128 00:36:42.758 Maximum Copy Length: 128 00:36:42.758 Maximum Source Range Count: 128 00:36:42.758 NGUID/EUI64 Never Reused: No 00:36:42.758 Namespace Write Protected: No 00:36:42.758 Endurance group ID: 1 00:36:42.758 Number of LBA Formats: 8 00:36:42.758 Current LBA Format: LBA Format #04 00:36:42.758 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:42.758 LBA Format #01: Data Size: 512 Metadata Size: 8 00:36:42.758 LBA Format #02: Data Size: 512 Metadata Size: 16 00:36:42.758 LBA Format #03: Data Size: 512 Metadata Size: 64 00:36:42.758 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:36:42.758 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:36:42.758 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:36:42.758 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:36:42.758 00:36:42.758 Get Feature FDP: 00:36:42.758 ================ 00:36:42.758 Enabled: Yes 00:36:42.758 FDP configuration index: 0 00:36:42.758 00:36:42.758 FDP configurations log page 00:36:42.758 =========================== 00:36:42.758 Number of FDP configurations: 1 00:36:42.758 Version: 0 00:36:42.758 Size: 112 00:36:42.758 FDP Configuration Descriptor: 0 00:36:42.758 Descriptor Size: 96 00:36:42.758 Reclaim Group Identifier format: 2 00:36:42.758 FDP Volatile Write Cache: Not Present 00:36:42.758 FDP Configuration: Valid 00:36:42.758 Vendor Specific Size: 0 00:36:42.758 Number of Reclaim Groups: 2 00:36:42.758 Number of Reclaim Unit Handles: 8 00:36:42.758 Max Placement Identifiers: 128 00:36:42.758 Number of Namespaces Supported: 256
00:36:42.758 Reclaim unit Nominal Size: 6000000 bytes 00:36:42.758 Estimated Reclaim Unit Time Limit: Not Reported 00:36:42.758 RUH Desc #000: RUH Type: Initially Isolated 00:36:42.758 RUH Desc #001: RUH Type: Initially Isolated 00:36:42.758 RUH Desc #002: RUH Type: Initially Isolated 00:36:42.758 RUH Desc #003: RUH Type: Initially Isolated 00:36:42.758 RUH Desc #004: RUH Type: Initially Isolated 00:36:42.758 RUH Desc #005: RUH Type: Initially Isolated 00:36:42.758 RUH Desc #006: RUH Type: Initially Isolated 00:36:42.758 RUH Desc #007: RUH Type: Initially Isolated 00:36:42.758 00:36:42.758 FDP reclaim unit handle usage log page 00:36:42.758 ====================================== 00:36:42.758 Number of Reclaim Unit Handles: 8 00:36:42.758 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:36:42.758 RUH Usage Desc #001: RUH Attributes: Unused 00:36:42.758 RUH Usage Desc #002: RUH Attributes: Unused 00:36:42.758 RUH Usage Desc #003: RUH Attributes: Unused 00:36:42.758 RUH Usage Desc #004: RUH Attributes: Unused 00:36:42.758 RUH Usage Desc #005: RUH Attributes: Unused 00:36:42.758 RUH Usage Desc #006: RUH Attributes: Unused 00:36:42.758 RUH Usage Desc #007: RUH Attributes: Unused 00:36:42.758 00:36:42.758 FDP statistics log page 00:36:42.758 ======================= 00:36:42.758 Host bytes with metadata written: 512663552 00:36:42.758 [2024-11-26 17:33:43.200030] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64159 terminated unexpected 00:36:42.758 Media bytes with metadata written: 512720896 00:36:42.758 Media bytes erased: 0 00:36:42.758 00:36:42.758 FDP events log page 00:36:42.758 =================== 00:36:42.758 Number of FDP events: 0 00:36:42.758 00:36:42.758 NVM Specific Namespace Data 00:36:42.758 =========================== 00:36:42.758 Logical Block Storage Tag Mask: 0 00:36:42.758 Protection Information Capabilities: 00:36:42.758 16b Guard Protection Information Storage Tag Support: No 00:36:42.758 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:36:42.758 Storage Tag Check Read Support: No 00:36:42.758 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.758 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.758 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.758 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.758 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.758 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.758 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.758 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.758 ===================================================== 00:36:42.758 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:36:42.758 ===================================================== 00:36:42.758 Controller Capabilities/Features 00:36:42.758 ================================ 00:36:42.758 Vendor ID: 1b36 00:36:42.758 Subsystem Vendor ID: 1af4 00:36:42.758 Serial Number: 12342 00:36:42.758 Model Number: QEMU NVMe Ctrl 00:36:42.758 Firmware Version: 8.0.0 00:36:42.758 Recommended Arb Burst: 6 00:36:42.758 IEEE OUI Identifier: 00 54 52 00:36:42.758 Multi-path I/O
00:36:42.758 May have multiple subsystem ports: No 00:36:42.758 May have multiple controllers: No 00:36:42.758 Associated with SR-IOV VF: No 00:36:42.758 Max Data Transfer Size: 524288 00:36:42.758 Max Number of Namespaces: 256 00:36:42.758 Max Number of I/O Queues: 64 00:36:42.758 NVMe Specification Version (VS): 1.4 00:36:42.758 NVMe Specification Version (Identify): 1.4 00:36:42.758 Maximum Queue Entries: 2048 00:36:42.758 Contiguous Queues Required: Yes 00:36:42.758 Arbitration Mechanisms Supported 00:36:42.758 Weighted Round Robin: Not Supported 00:36:42.758 Vendor Specific: Not Supported 00:36:42.758 Reset Timeout: 7500 ms 00:36:42.758 Doorbell Stride: 4 bytes 00:36:42.758 NVM Subsystem Reset: Not Supported 00:36:42.758 Command Sets Supported 00:36:42.758 NVM Command Set: Supported 00:36:42.758 Boot Partition: Not Supported 00:36:42.758 Memory Page Size Minimum: 4096 bytes 00:36:42.758 Memory Page Size Maximum: 65536 bytes 00:36:42.758 Persistent Memory Region: Not Supported 00:36:42.758 Optional Asynchronous Events Supported 00:36:42.758 Namespace Attribute Notices: Supported 00:36:42.758 Firmware Activation Notices: Not Supported 00:36:42.758 ANA Change Notices: Not Supported 00:36:42.758 PLE Aggregate Log Change Notices: Not Supported 00:36:42.758 LBA Status Info Alert Notices: Not Supported 00:36:42.758 EGE Aggregate Log Change Notices: Not Supported 00:36:42.758 Normal NVM Subsystem Shutdown event: Not Supported 00:36:42.758 Zone Descriptor Change Notices: Not Supported 00:36:42.758 Discovery Log Change Notices: Not Supported 00:36:42.758 Controller Attributes 00:36:42.758 128-bit Host Identifier: Not Supported 00:36:42.758 Non-Operational Permissive Mode: Not Supported 00:36:42.758 NVM Sets: Not Supported 00:36:42.758 Read Recovery Levels: Not Supported 00:36:42.758 Endurance Groups: Not Supported 00:36:42.758 Predictable Latency Mode: Not Supported 00:36:42.758 Traffic Based Keep ALive: Not Supported 00:36:42.758 Namespace Granularity: Not Supported 00:36:42.758 SQ Associations: Not Supported 00:36:42.758 UUID List: Not Supported 00:36:42.758 Multi-Domain Subsystem: Not Supported 00:36:42.758 Fixed Capacity Management: Not Supported 00:36:42.758 Variable Capacity Management: Not Supported 00:36:42.758 Delete Endurance Group: Not Supported 00:36:42.758 Delete NVM Set: Not Supported 00:36:42.758 Extended LBA Formats Supported: Supported 00:36:42.758 Flexible Data Placement Supported: Not Supported 00:36:42.758 00:36:42.758 Controller Memory Buffer Support 00:36:42.758 ================================ 00:36:42.759 Supported: No 00:36:42.759 00:36:42.759 Persistent Memory Region Support 00:36:42.759 ================================ 00:36:42.759 Supported: No 00:36:42.759 00:36:42.759 Admin Command Set Attributes 00:36:42.759 ============================ 00:36:42.759 Security Send/Receive: Not Supported 00:36:42.759 Format NVM: Supported 00:36:42.759 Firmware Activate/Download: Not Supported 00:36:42.759 Namespace Management: Supported 00:36:42.759 Device Self-Test: Not Supported 00:36:42.759 Directives: Supported 00:36:42.759 NVMe-MI: Not Supported 00:36:42.759 Virtualization Management: Not Supported 00:36:42.759 Doorbell Buffer Config: Supported 00:36:42.759 Get LBA Status Capability: Not Supported 00:36:42.759 Command & Feature Lockdown Capability: Not Supported 00:36:42.759 Abort Command Limit: 4 00:36:42.759 Async Event Request Limit: 4 00:36:42.759 Number of Firmware Slots: N/A 00:36:42.759 Firmware Slot 1 Read-Only: N/A 00:36:42.759 Firmware Activation Without Reset: N/A 
00:36:42.759 Multiple Update Detection Support: N/A 00:36:42.759 Firmware Update Granularity: No Information Provided 00:36:42.759 Per-Namespace SMART Log: Yes 00:36:42.759 Asymmetric Namespace Access Log Page: Not Supported 00:36:42.759 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:36:42.759 Command Effects Log Page: Supported 00:36:42.759 Get Log Page Extended Data: Supported 00:36:42.759 Telemetry Log Pages: Not Supported 00:36:42.759 Persistent Event Log Pages: Not Supported 00:36:42.759 Supported Log Pages Log Page: May Support 00:36:42.759 Commands Supported & Effects Log Page: Not Supported 00:36:42.759 Feature Identifiers & Effects Log Page:May Support 00:36:42.759 NVMe-MI Commands & Effects Log Page: May Support 00:36:42.759 Data Area 4 for Telemetry Log: Not Supported 00:36:42.759 Error Log Page Entries Supported: 1 00:36:42.759 Keep Alive: Not Supported 00:36:42.759 00:36:42.759 NVM Command Set Attributes 00:36:42.759 ========================== 00:36:42.759 Submission Queue Entry Size 00:36:42.759 Max: 64 00:36:42.759 Min: 64 00:36:42.759 Completion Queue Entry Size 00:36:42.759 Max: 16 00:36:42.759 Min: 16 00:36:42.759 Number of Namespaces: 256 00:36:42.759 Compare Command: Supported 00:36:42.759 Write Uncorrectable Command: Not Supported 00:36:42.759 Dataset Management Command: Supported 00:36:42.759 Write Zeroes Command: Supported 00:36:42.759 Set Features Save Field: Supported 00:36:42.759 Reservations: Not Supported 00:36:42.759 Timestamp: Supported 00:36:42.759 Copy: Supported 00:36:42.759 Volatile Write Cache: Present 00:36:42.759 Atomic Write Unit (Normal): 1 00:36:42.759 Atomic Write Unit (PFail): 1 00:36:42.759 Atomic Compare & Write Unit: 1 00:36:42.759 Fused Compare & Write: Not Supported 00:36:42.759 Scatter-Gather List 00:36:42.759 SGL Command Set: Supported 00:36:42.759 SGL Keyed: Not Supported 00:36:42.759 SGL Bit Bucket Descriptor: Not Supported 00:36:42.759 SGL Metadata Pointer: Not Supported 00:36:42.759 Oversized SGL: Not Supported 00:36:42.759 SGL Metadata Address: Not Supported 00:36:42.759 SGL Offset: Not Supported 00:36:42.759 Transport SGL Data Block: Not Supported 00:36:42.759 Replay Protected Memory Block: Not Supported 00:36:42.759 00:36:42.759 Firmware Slot Information 00:36:42.759 ========================= 00:36:42.759 Active slot: 1 00:36:42.759 Slot 1 Firmware Revision: 1.0 00:36:42.759 00:36:42.759 00:36:42.759 Commands Supported and Effects 00:36:42.759 ============================== 00:36:42.759 Admin Commands 00:36:42.759 -------------- 00:36:42.759 Delete I/O Submission Queue (00h): Supported 00:36:42.759 Create I/O Submission Queue (01h): Supported 00:36:42.759 Get Log Page (02h): Supported 00:36:42.759 Delete I/O Completion Queue (04h): Supported 00:36:42.759 Create I/O Completion Queue (05h): Supported 00:36:42.759 Identify (06h): Supported 00:36:42.759 Abort (08h): Supported 00:36:42.759 Set Features (09h): Supported 00:36:42.759 Get Features (0Ah): Supported 00:36:42.759 Asynchronous Event Request (0Ch): Supported 00:36:42.759 Namespace Attachment (15h): Supported NS-Inventory-Change 00:36:42.759 Directive Send (19h): Supported 00:36:42.759 Directive Receive (1Ah): Supported 00:36:42.759 Virtualization Management (1Ch): Supported 00:36:42.759 Doorbell Buffer Config (7Ch): Supported 00:36:42.759 Format NVM (80h): Supported LBA-Change 00:36:42.759 I/O Commands 00:36:42.759 ------------ 00:36:42.759 Flush (00h): Supported LBA-Change 00:36:42.759 Write (01h): Supported LBA-Change 00:36:42.759 Read (02h): Supported 00:36:42.759 Compare (05h): 
Supported 00:36:42.759 Write Zeroes (08h): Supported LBA-Change 00:36:42.759 Dataset Management (09h): Supported LBA-Change 00:36:42.759 Unknown (0Ch): Supported 00:36:42.759 Unknown (12h): Supported 00:36:42.759 Copy (19h): Supported LBA-Change 00:36:42.759 Unknown (1Dh): Supported LBA-Change 00:36:42.759 00:36:42.759 Error Log 00:36:42.759 ========= 00:36:42.759 00:36:42.759 Arbitration 00:36:42.759 =========== 00:36:42.759 Arbitration Burst: no limit 00:36:42.759 00:36:42.759 Power Management 00:36:42.759 ================ 00:36:42.759 Number of Power States: 1 00:36:42.759 Current Power State: Power State #0 00:36:42.759 Power State #0: 00:36:42.759 Max Power: 25.00 W 00:36:42.759 Non-Operational State: Operational 00:36:42.759 Entry Latency: 16 microseconds 00:36:42.759 Exit Latency: 4 microseconds 00:36:42.759 Relative Read Throughput: 0 00:36:42.759 Relative Read Latency: 0 00:36:42.759 Relative Write Throughput: 0 00:36:42.759 Relative Write Latency: 0 00:36:42.759 Idle Power: Not Reported 00:36:42.759 Active Power: Not Reported 00:36:42.759 Non-Operational Permissive Mode: Not Supported 00:36:42.759 00:36:42.759 Health Information 00:36:42.759 ================== 00:36:42.759 Critical Warnings: 00:36:42.759 Available Spare Space: OK 00:36:42.759 Temperature: OK 00:36:42.759 Device Reliability: OK 00:36:42.759 Read Only: No 00:36:42.759 Volatile Memory Backup: OK 00:36:42.759 Current Temperature: 323 Kelvin (50 Celsius) 00:36:42.759 Temperature Threshold: 343 Kelvin (70 Celsius) 00:36:42.759 Available Spare: 0% 00:36:42.759 Available Spare Threshold: 0% 00:36:42.759 Life Percentage Used: 0% 00:36:42.759 Data Units Read: 2454 00:36:42.759 Data Units Written: 2241 00:36:42.759 Host Read Commands: 112049 00:36:42.759 Host Write Commands: 110318 00:36:42.759 Controller Busy Time: 0 minutes 00:36:42.759 Power Cycles: 0 00:36:42.759 Power On Hours: 0 hours 00:36:42.759 Unsafe Shutdowns: 0 00:36:42.759 Unrecoverable Media Errors: 0 00:36:42.759 Lifetime Error Log Entries: 0 00:36:42.759 Warning Temperature Time: 0 minutes 00:36:42.759 Critical Temperature Time: 0 minutes 00:36:42.759 00:36:42.759 Number of Queues 00:36:42.759 ================ 00:36:42.759 Number of I/O Submission Queues: 64 00:36:42.759 Number of I/O Completion Queues: 64 00:36:42.759 00:36:42.759 ZNS Specific Controller Data 00:36:42.759 ============================ 00:36:42.759 Zone Append Size Limit: 0 00:36:42.759 00:36:42.759 00:36:42.759 Active Namespaces 00:36:42.759 ================= 00:36:42.759 Namespace ID:1 00:36:42.759 Error Recovery Timeout: Unlimited 00:36:42.759 Command Set Identifier: NVM (00h) 00:36:42.759 Deallocate: Supported 00:36:42.759 Deallocated/Unwritten Error: Supported 00:36:42.759 Deallocated Read Value: All 0x00 00:36:42.759 Deallocate in Write Zeroes: Not Supported 00:36:42.759 Deallocated Guard Field: 0xFFFF 00:36:42.759 Flush: Supported 00:36:42.759 Reservation: Not Supported 00:36:42.759 Namespace Sharing Capabilities: Private 00:36:42.759 Size (in LBAs): 1048576 (4GiB) 00:36:42.759 Capacity (in LBAs): 1048576 (4GiB) 00:36:42.759 Utilization (in LBAs): 1048576 (4GiB) 00:36:42.759 Thin Provisioning: Not Supported 00:36:42.759 Per-NS Atomic Units: No 00:36:42.759 Maximum Single Source Range Length: 128 00:36:42.759 Maximum Copy Length: 128 00:36:42.759 Maximum Source Range Count: 128 00:36:42.759 NGUID/EUI64 Never Reused: No 00:36:42.759 Namespace Write Protected: No 00:36:42.759 Number of LBA Formats: 8 00:36:42.759 Current LBA Format: LBA Format #04 00:36:42.759 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:36:42.759 LBA Format #01: Data Size: 512 Metadata Size: 8 00:36:42.759 LBA Format #02: Data Size: 512 Metadata Size: 16 00:36:42.759 LBA Format #03: Data Size: 512 Metadata Size: 64 00:36:42.759 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:36:42.759 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:36:42.759 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:36:42.759 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:36:42.759 00:36:42.759 NVM Specific Namespace Data 00:36:42.759 =========================== 00:36:42.759 Logical Block Storage Tag Mask: 0 00:36:42.760 Protection Information Capabilities: 00:36:42.760 16b Guard Protection Information Storage Tag Support: No 00:36:42.760 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:36:42.760 Storage Tag Check Read Support: No 00:36:42.760 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.760 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.760 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.760 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.760 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.760 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.760 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.760 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.760 Namespace ID:2 00:36:42.760 Error Recovery Timeout: Unlimited 00:36:42.760 Command Set Identifier: NVM (00h) 00:36:42.760 Deallocate: Supported 00:36:42.760 Deallocated/Unwritten Error: Supported 00:36:42.760 Deallocated Read Value: All 0x00 00:36:42.760 Deallocate in Write Zeroes: Not Supported 00:36:42.760 Deallocated Guard Field: 0xFFFF 00:36:42.760 Flush: Supported 00:36:42.760 Reservation: Not Supported 00:36:42.760 Namespace Sharing Capabilities: Private 00:36:42.760 Size (in LBAs): 1048576 (4GiB) 00:36:42.760 Capacity (in LBAs): 1048576 (4GiB) 00:36:42.760 Utilization (in LBAs): 1048576 (4GiB) 00:36:42.760 Thin Provisioning: Not Supported 00:36:42.760 Per-NS Atomic Units: No 00:36:42.760 Maximum Single Source Range Length: 128 00:36:42.760 Maximum Copy Length: 128 00:36:42.760 Maximum Source Range Count: 128 00:36:42.760 NGUID/EUI64 Never Reused: No 00:36:42.760 Namespace Write Protected: No 00:36:42.760 Number of LBA Formats: 8 00:36:42.760 Current LBA Format: LBA Format #04 00:36:42.760 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:42.760 LBA Format #01: Data Size: 512 Metadata Size: 8 00:36:42.760 LBA Format #02: Data Size: 512 Metadata Size: 16 00:36:42.760 LBA Format #03: Data Size: 512 Metadata Size: 64 00:36:42.760 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:36:42.760 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:36:42.760 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:36:42.760 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:36:42.760 00:36:42.760 NVM Specific Namespace Data 00:36:42.760 =========================== 00:36:42.760 Logical Block Storage Tag Mask: 0 00:36:42.760 Protection Information Capabilities: 00:36:42.760 16b Guard Protection Information Storage Tag Support: No 00:36:42.760 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
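
Size, Capacity, and Utilization for the namespaces above are all reported in LBAs, and the GiB figure in parentheses is just the LBA count times the data size of the current LBA format (#04 here: 4096-byte data, no metadata). A quick standalone check of the arithmetic:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t nsze = 1048576;        /* Size (in LBAs), from the log */
        uint32_t lba_data_size = 4096;  /* LBA Format #04: Data Size: 4096 */

        uint64_t bytes = nsze * lba_data_size;
        printf("%llu bytes = %llu GiB\n",
               (unsigned long long)bytes,
               (unsigned long long)(bytes >> 30));  /* 4294967296 bytes = 4 GiB */
        return 0;
    }
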
00:36:42.760 Storage Tag Check Read Support: No 00:36:42.760 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.760 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.760 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.760 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.760 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.760 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.760 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.760 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.760 Namespace ID:3 00:36:42.760 Error Recovery Timeout: Unlimited 00:36:42.760 Command Set Identifier: NVM (00h) 00:36:42.760 Deallocate: Supported 00:36:42.760 Deallocated/Unwritten Error: Supported 00:36:42.760 Deallocated Read Value: All 0x00 00:36:42.760 Deallocate in Write Zeroes: Not Supported 00:36:42.760 Deallocated Guard Field: 0xFFFF 00:36:42.760 Flush: Supported 00:36:42.760 Reservation: Not Supported 00:36:42.760 Namespace Sharing Capabilities: Private 00:36:42.760 Size (in LBAs): 1048576 (4GiB) 00:36:42.760 Capacity (in LBAs): 1048576 (4GiB) 00:36:42.760 Utilization (in LBAs): 1048576 (4GiB) 00:36:42.760 Thin Provisioning: Not Supported 00:36:42.760 Per-NS Atomic Units: No 00:36:42.760 Maximum Single Source Range Length: 128 00:36:42.760 Maximum Copy Length: 128 00:36:42.760 Maximum Source Range Count: 128 00:36:42.760 NGUID/EUI64 Never Reused: No 00:36:42.760 Namespace Write Protected: No 00:36:42.760 Number of LBA Formats: 8 00:36:42.760 Current LBA Format: LBA Format #04 00:36:42.760 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:42.760 LBA Format #01: Data Size: 512 Metadata Size: 8 00:36:42.760 LBA Format #02: Data Size: 512 Metadata Size: 16 00:36:42.760 LBA Format #03: Data Size: 512 Metadata Size: 64 00:36:42.760 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:36:42.760 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:36:42.760 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:36:42.760 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:36:42.760 00:36:42.760 NVM Specific Namespace Data 00:36:42.760 =========================== 00:36:42.760 Logical Block Storage Tag Mask: 0 00:36:42.760 Protection Information Capabilities: 00:36:42.760 16b Guard Protection Information Storage Tag Support: No 00:36:42.760 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:36:42.760 Storage Tag Check Read Support: No 00:36:42.760 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.760 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.760 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.760 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.760 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.760 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.760 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.760 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:42.760 17:33:43 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:36:42.760 17:33:43 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:36:43.019 ===================================================== 00:36:43.019 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:36:43.019 ===================================================== 00:36:43.019 Controller Capabilities/Features 00:36:43.019 ================================ 00:36:43.019 Vendor ID: 1b36 00:36:43.019 Subsystem Vendor ID: 1af4 00:36:43.019 Serial Number: 12340 00:36:43.019 Model Number: QEMU NVMe Ctrl 00:36:43.019 Firmware Version: 8.0.0 00:36:43.019 Recommended Arb Burst: 6 00:36:43.019 IEEE OUI Identifier: 00 54 52 00:36:43.019 Multi-path I/O 00:36:43.019 May have multiple subsystem ports: No 00:36:43.019 May have multiple controllers: No 00:36:43.019 Associated with SR-IOV VF: No 00:36:43.019 Max Data Transfer Size: 524288 00:36:43.019 Max Number of Namespaces: 256 00:36:43.019 Max Number of I/O Queues: 64 00:36:43.019 NVMe Specification Version (VS): 1.4 00:36:43.019 NVMe Specification Version (Identify): 1.4 00:36:43.019 Maximum Queue Entries: 2048 00:36:43.019 Contiguous Queues Required: Yes 00:36:43.019 Arbitration Mechanisms Supported 00:36:43.019 Weighted Round Robin: Not Supported 00:36:43.019 Vendor Specific: Not Supported 00:36:43.019 Reset Timeout: 7500 ms 00:36:43.019 Doorbell Stride: 4 bytes 00:36:43.019 NVM Subsystem Reset: Not Supported 00:36:43.019 Command Sets Supported 00:36:43.019 NVM Command Set: Supported 00:36:43.019 Boot Partition: Not Supported 00:36:43.019 Memory Page Size Minimum: 4096 bytes 00:36:43.019 Memory Page Size Maximum: 65536 bytes 00:36:43.020 Persistent Memory Region: Not Supported 00:36:43.020 Optional Asynchronous Events Supported 00:36:43.020 Namespace Attribute Notices: Supported 00:36:43.020 Firmware Activation Notices: Not Supported 00:36:43.020 ANA Change Notices: Not Supported 00:36:43.020 PLE Aggregate Log Change Notices: Not Supported 00:36:43.020 LBA Status Info Alert Notices: Not Supported 00:36:43.020 EGE Aggregate Log Change Notices: Not Supported 00:36:43.020 Normal NVM Subsystem Shutdown event: Not Supported 00:36:43.020 Zone Descriptor Change Notices: Not Supported 00:36:43.020 Discovery Log Change Notices: Not Supported 00:36:43.020 Controller Attributes 00:36:43.020 128-bit Host Identifier: Not Supported 00:36:43.020 Non-Operational Permissive Mode: Not Supported 00:36:43.020 NVM Sets: Not Supported 00:36:43.020 Read Recovery Levels: Not Supported 00:36:43.020 Endurance Groups: Not Supported 00:36:43.020 Predictable Latency Mode: Not Supported 00:36:43.020 Traffic Based Keep ALive: Not Supported 00:36:43.020 Namespace Granularity: Not Supported 00:36:43.020 SQ Associations: Not Supported 00:36:43.020 UUID List: Not Supported 00:36:43.020 Multi-Domain Subsystem: Not Supported 00:36:43.020 Fixed Capacity Management: Not Supported 00:36:43.020 Variable Capacity Management: Not Supported 00:36:43.020 Delete Endurance Group: Not Supported 00:36:43.020 Delete NVM Set: Not Supported 00:36:43.020 Extended LBA Formats Supported: Supported 00:36:43.020 Flexible Data Placement Supported: Not Supported 00:36:43.020 00:36:43.020 Controller Memory Buffer Support 00:36:43.020 ================================ 00:36:43.020 Supported: No 00:36:43.020 00:36:43.020 Persistent Memory Region Support 00:36:43.020 
================================ 00:36:43.020 Supported: No 00:36:43.020 00:36:43.020 Admin Command Set Attributes 00:36:43.020 ============================ 00:36:43.020 Security Send/Receive: Not Supported 00:36:43.020 Format NVM: Supported 00:36:43.020 Firmware Activate/Download: Not Supported 00:36:43.020 Namespace Management: Supported 00:36:43.020 Device Self-Test: Not Supported 00:36:43.020 Directives: Supported 00:36:43.020 NVMe-MI: Not Supported 00:36:43.020 Virtualization Management: Not Supported 00:36:43.020 Doorbell Buffer Config: Supported 00:36:43.020 Get LBA Status Capability: Not Supported 00:36:43.020 Command & Feature Lockdown Capability: Not Supported 00:36:43.020 Abort Command Limit: 4 00:36:43.020 Async Event Request Limit: 4 00:36:43.020 Number of Firmware Slots: N/A 00:36:43.020 Firmware Slot 1 Read-Only: N/A 00:36:43.020 Firmware Activation Without Reset: N/A 00:36:43.020 Multiple Update Detection Support: N/A 00:36:43.020 Firmware Update Granularity: No Information Provided 00:36:43.020 Per-Namespace SMART Log: Yes 00:36:43.020 Asymmetric Namespace Access Log Page: Not Supported 00:36:43.020 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:36:43.020 Command Effects Log Page: Supported 00:36:43.020 Get Log Page Extended Data: Supported 00:36:43.020 Telemetry Log Pages: Not Supported 00:36:43.020 Persistent Event Log Pages: Not Supported 00:36:43.020 Supported Log Pages Log Page: May Support 00:36:43.020 Commands Supported & Effects Log Page: Not Supported 00:36:43.020 Feature Identifiers & Effects Log Page:May Support 00:36:43.020 NVMe-MI Commands & Effects Log Page: May Support 00:36:43.020 Data Area 4 for Telemetry Log: Not Supported 00:36:43.020 Error Log Page Entries Supported: 1 00:36:43.020 Keep Alive: Not Supported 00:36:43.020 00:36:43.020 NVM Command Set Attributes 00:36:43.020 ========================== 00:36:43.020 Submission Queue Entry Size 00:36:43.020 Max: 64 00:36:43.020 Min: 64 00:36:43.020 Completion Queue Entry Size 00:36:43.020 Max: 16 00:36:43.020 Min: 16 00:36:43.020 Number of Namespaces: 256 00:36:43.020 Compare Command: Supported 00:36:43.020 Write Uncorrectable Command: Not Supported 00:36:43.020 Dataset Management Command: Supported 00:36:43.020 Write Zeroes Command: Supported 00:36:43.020 Set Features Save Field: Supported 00:36:43.020 Reservations: Not Supported 00:36:43.020 Timestamp: Supported 00:36:43.020 Copy: Supported 00:36:43.020 Volatile Write Cache: Present 00:36:43.020 Atomic Write Unit (Normal): 1 00:36:43.020 Atomic Write Unit (PFail): 1 00:36:43.020 Atomic Compare & Write Unit: 1 00:36:43.020 Fused Compare & Write: Not Supported 00:36:43.020 Scatter-Gather List 00:36:43.020 SGL Command Set: Supported 00:36:43.020 SGL Keyed: Not Supported 00:36:43.020 SGL Bit Bucket Descriptor: Not Supported 00:36:43.020 SGL Metadata Pointer: Not Supported 00:36:43.020 Oversized SGL: Not Supported 00:36:43.020 SGL Metadata Address: Not Supported 00:36:43.020 SGL Offset: Not Supported 00:36:43.020 Transport SGL Data Block: Not Supported 00:36:43.020 Replay Protected Memory Block: Not Supported 00:36:43.020 00:36:43.020 Firmware Slot Information 00:36:43.020 ========================= 00:36:43.020 Active slot: 1 00:36:43.020 Slot 1 Firmware Revision: 1.0 00:36:43.020 00:36:43.020 00:36:43.020 Commands Supported and Effects 00:36:43.020 ============================== 00:36:43.020 Admin Commands 00:36:43.020 -------------- 00:36:43.020 Delete I/O Submission Queue (00h): Supported 00:36:43.020 Create I/O Submission Queue (01h): Supported 00:36:43.020 
Get Log Page (02h): Supported 00:36:43.020 Delete I/O Completion Queue (04h): Supported 00:36:43.020 Create I/O Completion Queue (05h): Supported 00:36:43.020 Identify (06h): Supported 00:36:43.020 Abort (08h): Supported 00:36:43.020 Set Features (09h): Supported 00:36:43.020 Get Features (0Ah): Supported 00:36:43.020 Asynchronous Event Request (0Ch): Supported 00:36:43.020 Namespace Attachment (15h): Supported NS-Inventory-Change 00:36:43.020 Directive Send (19h): Supported 00:36:43.020 Directive Receive (1Ah): Supported 00:36:43.020 Virtualization Management (1Ch): Supported 00:36:43.020 Doorbell Buffer Config (7Ch): Supported 00:36:43.020 Format NVM (80h): Supported LBA-Change 00:36:43.020 I/O Commands 00:36:43.020 ------------ 00:36:43.020 Flush (00h): Supported LBA-Change 00:36:43.020 Write (01h): Supported LBA-Change 00:36:43.020 Read (02h): Supported 00:36:43.020 Compare (05h): Supported 00:36:43.020 Write Zeroes (08h): Supported LBA-Change 00:36:43.020 Dataset Management (09h): Supported LBA-Change 00:36:43.020 Unknown (0Ch): Supported 00:36:43.020 Unknown (12h): Supported 00:36:43.020 Copy (19h): Supported LBA-Change 00:36:43.020 Unknown (1Dh): Supported LBA-Change 00:36:43.020 00:36:43.020 Error Log 00:36:43.020 ========= 00:36:43.020 00:36:43.020 Arbitration 00:36:43.020 =========== 00:36:43.020 Arbitration Burst: no limit 00:36:43.020 00:36:43.020 Power Management 00:36:43.020 ================ 00:36:43.020 Number of Power States: 1 00:36:43.020 Current Power State: Power State #0 00:36:43.020 Power State #0: 00:36:43.020 Max Power: 25.00 W 00:36:43.020 Non-Operational State: Operational 00:36:43.020 Entry Latency: 16 microseconds 00:36:43.020 Exit Latency: 4 microseconds 00:36:43.020 Relative Read Throughput: 0 00:36:43.020 Relative Read Latency: 0 00:36:43.020 Relative Write Throughput: 0 00:36:43.020 Relative Write Latency: 0 00:36:43.020 Idle Power: Not Reported 00:36:43.020 Active Power: Not Reported 00:36:43.020 Non-Operational Permissive Mode: Not Supported 00:36:43.020 00:36:43.020 Health Information 00:36:43.020 ================== 00:36:43.020 Critical Warnings: 00:36:43.020 Available Spare Space: OK 00:36:43.020 Temperature: OK 00:36:43.020 Device Reliability: OK 00:36:43.020 Read Only: No 00:36:43.020 Volatile Memory Backup: OK 00:36:43.020 Current Temperature: 323 Kelvin (50 Celsius) 00:36:43.020 Temperature Threshold: 343 Kelvin (70 Celsius) 00:36:43.020 Available Spare: 0% 00:36:43.020 Available Spare Threshold: 0% 00:36:43.020 Life Percentage Used: 0% 00:36:43.020 Data Units Read: 753 00:36:43.020 Data Units Written: 682 00:36:43.020 Host Read Commands: 36685 00:36:43.020 Host Write Commands: 36471 00:36:43.020 Controller Busy Time: 0 minutes 00:36:43.020 Power Cycles: 0 00:36:43.020 Power On Hours: 0 hours 00:36:43.020 Unsafe Shutdowns: 0 00:36:43.020 Unrecoverable Media Errors: 0 00:36:43.020 Lifetime Error Log Entries: 0 00:36:43.020 Warning Temperature Time: 0 minutes 00:36:43.020 Critical Temperature Time: 0 minutes 00:36:43.020 00:36:43.020 Number of Queues 00:36:43.020 ================ 00:36:43.020 Number of I/O Submission Queues: 64 00:36:43.020 Number of I/O Completion Queues: 64 00:36:43.020 00:36:43.020 ZNS Specific Controller Data 00:36:43.020 ============================ 00:36:43.020 Zone Append Size Limit: 0 00:36:43.020 00:36:43.020 00:36:43.020 Active Namespaces 00:36:43.020 ================= 00:36:43.020 Namespace ID:1 00:36:43.020 Error Recovery Timeout: Unlimited 00:36:43.021 Command Set Identifier: NVM (00h) 00:36:43.021 Deallocate: Supported 
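
In the health information above, temperatures are printed in Kelvin first because NVMe SMART data encodes them that way; the identify tool derives the Celsius value in parentheses by subtracting 273. A one-line confirmation:

    #include <stdio.h>

    int main(void)
    {
        int current_k = 323, threshold_k = 343;  /* from the health log above */

        /* NVMe SMART temperatures are reported in Kelvin; Celsius = K - 273. */
        printf("current: %d C, threshold: %d C\n",
               current_k - 273, threshold_k - 273);  /* 50 C, 70 C */
        return 0;
    }
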
00:36:43.021 Deallocated/Unwritten Error: Supported 00:36:43.021 Deallocated Read Value: All 0x00 00:36:43.021 Deallocate in Write Zeroes: Not Supported 00:36:43.021 Deallocated Guard Field: 0xFFFF 00:36:43.021 Flush: Supported 00:36:43.021 Reservation: Not Supported 00:36:43.021 Metadata Transferred as: Separate Metadata Buffer 00:36:43.021 Namespace Sharing Capabilities: Private 00:36:43.021 Size (in LBAs): 1548666 (5GiB) 00:36:43.021 Capacity (in LBAs): 1548666 (5GiB) 00:36:43.021 Utilization (in LBAs): 1548666 (5GiB) 00:36:43.021 Thin Provisioning: Not Supported 00:36:43.021 Per-NS Atomic Units: No 00:36:43.021 Maximum Single Source Range Length: 128 00:36:43.021 Maximum Copy Length: 128 00:36:43.021 Maximum Source Range Count: 128 00:36:43.021 NGUID/EUI64 Never Reused: No 00:36:43.021 Namespace Write Protected: No 00:36:43.021 Number of LBA Formats: 8 00:36:43.021 Current LBA Format: LBA Format #07 00:36:43.021 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:43.021 LBA Format #01: Data Size: 512 Metadata Size: 8 00:36:43.021 LBA Format #02: Data Size: 512 Metadata Size: 16 00:36:43.021 LBA Format #03: Data Size: 512 Metadata Size: 64 00:36:43.021 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:36:43.021 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:36:43.021 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:36:43.021 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:36:43.021 00:36:43.021 NVM Specific Namespace Data 00:36:43.021 =========================== 00:36:43.021 Logical Block Storage Tag Mask: 0 00:36:43.021 Protection Information Capabilities: 00:36:43.021 16b Guard Protection Information Storage Tag Support: No 00:36:43.021 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:36:43.021 Storage Tag Check Read Support: No 00:36:43.021 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.021 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.021 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.021 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.021 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.021 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.021 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.021 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.021 17:33:43 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:36:43.021 17:33:43 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:36:43.280 ===================================================== 00:36:43.280 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:36:43.280 ===================================================== 00:36:43.280 Controller Capabilities/Features 00:36:43.280 ================================ 00:36:43.280 Vendor ID: 1b36 00:36:43.280 Subsystem Vendor ID: 1af4 00:36:43.280 Serial Number: 12341 00:36:43.280 Model Number: QEMU NVMe Ctrl 00:36:43.280 Firmware Version: 8.0.0 00:36:43.280 Recommended Arb Burst: 6 00:36:43.280 IEEE OUI Identifier: 00 54 52 00:36:43.280 Multi-path I/O 00:36:43.280 May have multiple subsystem ports: No 00:36:43.280 May have multiple 
controllers: No 00:36:43.280 Associated with SR-IOV VF: No 00:36:43.280 Max Data Transfer Size: 524288 00:36:43.280 Max Number of Namespaces: 256 00:36:43.280 Max Number of I/O Queues: 64 00:36:43.280 NVMe Specification Version (VS): 1.4 00:36:43.280 NVMe Specification Version (Identify): 1.4 00:36:43.280 Maximum Queue Entries: 2048 00:36:43.280 Contiguous Queues Required: Yes 00:36:43.280 Arbitration Mechanisms Supported 00:36:43.280 Weighted Round Robin: Not Supported 00:36:43.280 Vendor Specific: Not Supported 00:36:43.280 Reset Timeout: 7500 ms 00:36:43.280 Doorbell Stride: 4 bytes 00:36:43.280 NVM Subsystem Reset: Not Supported 00:36:43.280 Command Sets Supported 00:36:43.280 NVM Command Set: Supported 00:36:43.280 Boot Partition: Not Supported 00:36:43.280 Memory Page Size Minimum: 4096 bytes 00:36:43.280 Memory Page Size Maximum: 65536 bytes 00:36:43.280 Persistent Memory Region: Not Supported 00:36:43.280 Optional Asynchronous Events Supported 00:36:43.280 Namespace Attribute Notices: Supported 00:36:43.280 Firmware Activation Notices: Not Supported 00:36:43.280 ANA Change Notices: Not Supported 00:36:43.280 PLE Aggregate Log Change Notices: Not Supported 00:36:43.280 LBA Status Info Alert Notices: Not Supported 00:36:43.280 EGE Aggregate Log Change Notices: Not Supported 00:36:43.280 Normal NVM Subsystem Shutdown event: Not Supported 00:36:43.280 Zone Descriptor Change Notices: Not Supported 00:36:43.280 Discovery Log Change Notices: Not Supported 00:36:43.280 Controller Attributes 00:36:43.280 128-bit Host Identifier: Not Supported 00:36:43.280 Non-Operational Permissive Mode: Not Supported 00:36:43.280 NVM Sets: Not Supported 00:36:43.280 Read Recovery Levels: Not Supported 00:36:43.280 Endurance Groups: Not Supported 00:36:43.280 Predictable Latency Mode: Not Supported 00:36:43.280 Traffic Based Keep ALive: Not Supported 00:36:43.280 Namespace Granularity: Not Supported 00:36:43.280 SQ Associations: Not Supported 00:36:43.280 UUID List: Not Supported 00:36:43.280 Multi-Domain Subsystem: Not Supported 00:36:43.280 Fixed Capacity Management: Not Supported 00:36:43.280 Variable Capacity Management: Not Supported 00:36:43.280 Delete Endurance Group: Not Supported 00:36:43.280 Delete NVM Set: Not Supported 00:36:43.280 Extended LBA Formats Supported: Supported 00:36:43.280 Flexible Data Placement Supported: Not Supported 00:36:43.280 00:36:43.280 Controller Memory Buffer Support 00:36:43.280 ================================ 00:36:43.280 Supported: No 00:36:43.280 00:36:43.280 Persistent Memory Region Support 00:36:43.280 ================================ 00:36:43.280 Supported: No 00:36:43.280 00:36:43.280 Admin Command Set Attributes 00:36:43.280 ============================ 00:36:43.280 Security Send/Receive: Not Supported 00:36:43.280 Format NVM: Supported 00:36:43.280 Firmware Activate/Download: Not Supported 00:36:43.280 Namespace Management: Supported 00:36:43.280 Device Self-Test: Not Supported 00:36:43.280 Directives: Supported 00:36:43.280 NVMe-MI: Not Supported 00:36:43.280 Virtualization Management: Not Supported 00:36:43.280 Doorbell Buffer Config: Supported 00:36:43.280 Get LBA Status Capability: Not Supported 00:36:43.280 Command & Feature Lockdown Capability: Not Supported 00:36:43.280 Abort Command Limit: 4 00:36:43.280 Async Event Request Limit: 4 00:36:43.280 Number of Firmware Slots: N/A 00:36:43.280 Firmware Slot 1 Read-Only: N/A 00:36:43.280 Firmware Activation Without Reset: N/A 00:36:43.280 Multiple Update Detection Support: N/A 00:36:43.280 Firmware Update 
Granularity: No Information Provided 00:36:43.281 Per-Namespace SMART Log: Yes 00:36:43.281 Asymmetric Namespace Access Log Page: Not Supported 00:36:43.281 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:36:43.281 Command Effects Log Page: Supported 00:36:43.281 Get Log Page Extended Data: Supported 00:36:43.281 Telemetry Log Pages: Not Supported 00:36:43.281 Persistent Event Log Pages: Not Supported 00:36:43.281 Supported Log Pages Log Page: May Support 00:36:43.281 Commands Supported & Effects Log Page: Not Supported 00:36:43.281 Feature Identifiers & Effects Log Page:May Support 00:36:43.281 NVMe-MI Commands & Effects Log Page: May Support 00:36:43.281 Data Area 4 for Telemetry Log: Not Supported 00:36:43.281 Error Log Page Entries Supported: 1 00:36:43.281 Keep Alive: Not Supported 00:36:43.281 00:36:43.281 NVM Command Set Attributes 00:36:43.281 ========================== 00:36:43.281 Submission Queue Entry Size 00:36:43.281 Max: 64 00:36:43.281 Min: 64 00:36:43.281 Completion Queue Entry Size 00:36:43.281 Max: 16 00:36:43.281 Min: 16 00:36:43.281 Number of Namespaces: 256 00:36:43.281 Compare Command: Supported 00:36:43.281 Write Uncorrectable Command: Not Supported 00:36:43.281 Dataset Management Command: Supported 00:36:43.281 Write Zeroes Command: Supported 00:36:43.281 Set Features Save Field: Supported 00:36:43.281 Reservations: Not Supported 00:36:43.281 Timestamp: Supported 00:36:43.281 Copy: Supported 00:36:43.281 Volatile Write Cache: Present 00:36:43.281 Atomic Write Unit (Normal): 1 00:36:43.281 Atomic Write Unit (PFail): 1 00:36:43.281 Atomic Compare & Write Unit: 1 00:36:43.281 Fused Compare & Write: Not Supported 00:36:43.281 Scatter-Gather List 00:36:43.281 SGL Command Set: Supported 00:36:43.281 SGL Keyed: Not Supported 00:36:43.281 SGL Bit Bucket Descriptor: Not Supported 00:36:43.281 SGL Metadata Pointer: Not Supported 00:36:43.281 Oversized SGL: Not Supported 00:36:43.281 SGL Metadata Address: Not Supported 00:36:43.281 SGL Offset: Not Supported 00:36:43.281 Transport SGL Data Block: Not Supported 00:36:43.281 Replay Protected Memory Block: Not Supported 00:36:43.281 00:36:43.281 Firmware Slot Information 00:36:43.281 ========================= 00:36:43.281 Active slot: 1 00:36:43.281 Slot 1 Firmware Revision: 1.0 00:36:43.281 00:36:43.281 00:36:43.281 Commands Supported and Effects 00:36:43.281 ============================== 00:36:43.281 Admin Commands 00:36:43.281 -------------- 00:36:43.281 Delete I/O Submission Queue (00h): Supported 00:36:43.281 Create I/O Submission Queue (01h): Supported 00:36:43.281 Get Log Page (02h): Supported 00:36:43.281 Delete I/O Completion Queue (04h): Supported 00:36:43.281 Create I/O Completion Queue (05h): Supported 00:36:43.281 Identify (06h): Supported 00:36:43.281 Abort (08h): Supported 00:36:43.281 Set Features (09h): Supported 00:36:43.281 Get Features (0Ah): Supported 00:36:43.281 Asynchronous Event Request (0Ch): Supported 00:36:43.281 Namespace Attachment (15h): Supported NS-Inventory-Change 00:36:43.281 Directive Send (19h): Supported 00:36:43.281 Directive Receive (1Ah): Supported 00:36:43.281 Virtualization Management (1Ch): Supported 00:36:43.281 Doorbell Buffer Config (7Ch): Supported 00:36:43.281 Format NVM (80h): Supported LBA-Change 00:36:43.281 I/O Commands 00:36:43.281 ------------ 00:36:43.281 Flush (00h): Supported LBA-Change 00:36:43.281 Write (01h): Supported LBA-Change 00:36:43.281 Read (02h): Supported 00:36:43.281 Compare (05h): Supported 00:36:43.281 Write Zeroes (08h): Supported LBA-Change 00:36:43.281 
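
A note on the 0000:00:10.0 namespace above, which reports Metadata Transferred as: Separate Metadata Buffer with current LBA format #07 (4096-byte data plus 64-byte metadata): in that mode the 64 metadata bytes per block travel in their own buffer, whereas an extended-LBA namespace would interleave them with the data, making each block effectively 4160 bytes. A small sketch of the distinction, plain arithmetic rather than an SPDK call:

    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    int main(void)
    {
        uint32_t data_size = 4096, md_size = 64;  /* LBA Format #07, from the log */
        bool extended_lba = false;  /* 12340 ns1 uses a separate metadata buffer */

        /* Extended LBAs carry data and metadata interleaved in one buffer;
         * with a separate buffer, the data stream stays 4096 B per block
         * and the metadata is supplied alongside. */
        uint32_t data_buf_per_block = extended_lba ? data_size + md_size : data_size;
        uint32_t md_buf_per_block   = extended_lba ? 0 : md_size;
        printf("data buffer: %u B/block, metadata buffer: %u B/block\n",
               data_buf_per_block, md_buf_per_block);  /* 4096, 64 */
        return 0;
    }
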
Dataset Management (09h): Supported LBA-Change 00:36:43.281 Unknown (0Ch): Supported 00:36:43.281 Unknown (12h): Supported 00:36:43.281 Copy (19h): Supported LBA-Change 00:36:43.281 Unknown (1Dh): Supported LBA-Change 00:36:43.281 00:36:43.281 Error Log 00:36:43.281 ========= 00:36:43.281 00:36:43.281 Arbitration 00:36:43.281 =========== 00:36:43.281 Arbitration Burst: no limit 00:36:43.281 00:36:43.281 Power Management 00:36:43.281 ================ 00:36:43.281 Number of Power States: 1 00:36:43.281 Current Power State: Power State #0 00:36:43.281 Power State #0: 00:36:43.281 Max Power: 25.00 W 00:36:43.281 Non-Operational State: Operational 00:36:43.281 Entry Latency: 16 microseconds 00:36:43.281 Exit Latency: 4 microseconds 00:36:43.281 Relative Read Throughput: 0 00:36:43.281 Relative Read Latency: 0 00:36:43.281 Relative Write Throughput: 0 00:36:43.281 Relative Write Latency: 0 00:36:43.281 Idle Power: Not Reported 00:36:43.281 Active Power: Not Reported 00:36:43.281 Non-Operational Permissive Mode: Not Supported 00:36:43.281 00:36:43.281 Health Information 00:36:43.281 ================== 00:36:43.281 Critical Warnings: 00:36:43.281 Available Spare Space: OK 00:36:43.281 Temperature: OK 00:36:43.281 Device Reliability: OK 00:36:43.281 Read Only: No 00:36:43.281 Volatile Memory Backup: OK 00:36:43.281 Current Temperature: 323 Kelvin (50 Celsius) 00:36:43.281 Temperature Threshold: 343 Kelvin (70 Celsius) 00:36:43.281 Available Spare: 0% 00:36:43.281 Available Spare Threshold: 0% 00:36:43.281 Life Percentage Used: 0% 00:36:43.281 Data Units Read: 1189 00:36:43.281 Data Units Written: 1055 00:36:43.281 Host Read Commands: 54955 00:36:43.281 Host Write Commands: 53734 00:36:43.281 Controller Busy Time: 0 minutes 00:36:43.281 Power Cycles: 0 00:36:43.281 Power On Hours: 0 hours 00:36:43.281 Unsafe Shutdowns: 0 00:36:43.281 Unrecoverable Media Errors: 0 00:36:43.281 Lifetime Error Log Entries: 0 00:36:43.281 Warning Temperature Time: 0 minutes 00:36:43.281 Critical Temperature Time: 0 minutes 00:36:43.281 00:36:43.281 Number of Queues 00:36:43.281 ================ 00:36:43.281 Number of I/O Submission Queues: 64 00:36:43.281 Number of I/O Completion Queues: 64 00:36:43.281 00:36:43.281 ZNS Specific Controller Data 00:36:43.281 ============================ 00:36:43.281 Zone Append Size Limit: 0 00:36:43.281 00:36:43.281 00:36:43.281 Active Namespaces 00:36:43.281 ================= 00:36:43.281 Namespace ID:1 00:36:43.281 Error Recovery Timeout: Unlimited 00:36:43.281 Command Set Identifier: NVM (00h) 00:36:43.281 Deallocate: Supported 00:36:43.281 Deallocated/Unwritten Error: Supported 00:36:43.281 Deallocated Read Value: All 0x00 00:36:43.281 Deallocate in Write Zeroes: Not Supported 00:36:43.281 Deallocated Guard Field: 0xFFFF 00:36:43.281 Flush: Supported 00:36:43.281 Reservation: Not Supported 00:36:43.281 Namespace Sharing Capabilities: Private 00:36:43.281 Size (in LBAs): 1310720 (5GiB) 00:36:43.281 Capacity (in LBAs): 1310720 (5GiB) 00:36:43.281 Utilization (in LBAs): 1310720 (5GiB) 00:36:43.281 Thin Provisioning: Not Supported 00:36:43.281 Per-NS Atomic Units: No 00:36:43.281 Maximum Single Source Range Length: 128 00:36:43.281 Maximum Copy Length: 128 00:36:43.281 Maximum Source Range Count: 128 00:36:43.281 NGUID/EUI64 Never Reused: No 00:36:43.281 Namespace Write Protected: No 00:36:43.281 Number of LBA Formats: 8 00:36:43.281 Current LBA Format: LBA Format #04 00:36:43.281 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:43.281 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:36:43.281 LBA Format #02: Data Size: 512 Metadata Size: 16 00:36:43.281 LBA Format #03: Data Size: 512 Metadata Size: 64 00:36:43.281 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:36:43.281 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:36:43.281 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:36:43.281 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:36:43.281 00:36:43.281 NVM Specific Namespace Data 00:36:43.281 =========================== 00:36:43.281 Logical Block Storage Tag Mask: 0 00:36:43.281 Protection Information Capabilities: 00:36:43.281 16b Guard Protection Information Storage Tag Support: No 00:36:43.281 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:36:43.281 Storage Tag Check Read Support: No 00:36:43.281 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.281 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.281 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.281 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.281 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.281 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.281 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.281 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.281 17:33:43 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:36:43.281 17:33:43 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:36:43.541 ===================================================== 00:36:43.541 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:36:43.541 ===================================================== 00:36:43.541 Controller Capabilities/Features 00:36:43.541 ================================ 00:36:43.541 Vendor ID: 1b36 00:36:43.541 Subsystem Vendor ID: 1af4 00:36:43.541 Serial Number: 12342 00:36:43.541 Model Number: QEMU NVMe Ctrl 00:36:43.541 Firmware Version: 8.0.0 00:36:43.541 Recommended Arb Burst: 6 00:36:43.541 IEEE OUI Identifier: 00 54 52 00:36:43.541 Multi-path I/O 00:36:43.541 May have multiple subsystem ports: No 00:36:43.541 May have multiple controllers: No 00:36:43.541 Associated with SR-IOV VF: No 00:36:43.541 Max Data Transfer Size: 524288 00:36:43.541 Max Number of Namespaces: 256 00:36:43.541 Max Number of I/O Queues: 64 00:36:43.541 NVMe Specification Version (VS): 1.4 00:36:43.541 NVMe Specification Version (Identify): 1.4 00:36:43.541 Maximum Queue Entries: 2048 00:36:43.541 Contiguous Queues Required: Yes 00:36:43.541 Arbitration Mechanisms Supported 00:36:43.541 Weighted Round Robin: Not Supported 00:36:43.541 Vendor Specific: Not Supported 00:36:43.541 Reset Timeout: 7500 ms 00:36:43.541 Doorbell Stride: 4 bytes 00:36:43.541 NVM Subsystem Reset: Not Supported 00:36:43.541 Command Sets Supported 00:36:43.541 NVM Command Set: Supported 00:36:43.541 Boot Partition: Not Supported 00:36:43.541 Memory Page Size Minimum: 4096 bytes 00:36:43.541 Memory Page Size Maximum: 65536 bytes 00:36:43.541 Persistent Memory Region: Not Supported 00:36:43.541 Optional Asynchronous Events Supported 00:36:43.541 Namespace Attribute Notices: Supported 00:36:43.541 
Firmware Activation Notices: Not Supported 00:36:43.541 ANA Change Notices: Not Supported 00:36:43.541 PLE Aggregate Log Change Notices: Not Supported 00:36:43.541 LBA Status Info Alert Notices: Not Supported 00:36:43.541 EGE Aggregate Log Change Notices: Not Supported 00:36:43.541 Normal NVM Subsystem Shutdown event: Not Supported 00:36:43.541 Zone Descriptor Change Notices: Not Supported 00:36:43.541 Discovery Log Change Notices: Not Supported 00:36:43.541 Controller Attributes 00:36:43.541 128-bit Host Identifier: Not Supported 00:36:43.541 Non-Operational Permissive Mode: Not Supported 00:36:43.541 NVM Sets: Not Supported 00:36:43.541 Read Recovery Levels: Not Supported 00:36:43.541 Endurance Groups: Not Supported 00:36:43.541 Predictable Latency Mode: Not Supported 00:36:43.541 Traffic Based Keep ALive: Not Supported 00:36:43.541 Namespace Granularity: Not Supported 00:36:43.541 SQ Associations: Not Supported 00:36:43.541 UUID List: Not Supported 00:36:43.541 Multi-Domain Subsystem: Not Supported 00:36:43.541 Fixed Capacity Management: Not Supported 00:36:43.541 Variable Capacity Management: Not Supported 00:36:43.541 Delete Endurance Group: Not Supported 00:36:43.541 Delete NVM Set: Not Supported 00:36:43.541 Extended LBA Formats Supported: Supported 00:36:43.541 Flexible Data Placement Supported: Not Supported 00:36:43.541 00:36:43.541 Controller Memory Buffer Support 00:36:43.541 ================================ 00:36:43.541 Supported: No 00:36:43.541 00:36:43.541 Persistent Memory Region Support 00:36:43.541 ================================ 00:36:43.541 Supported: No 00:36:43.541 00:36:43.541 Admin Command Set Attributes 00:36:43.541 ============================ 00:36:43.541 Security Send/Receive: Not Supported 00:36:43.541 Format NVM: Supported 00:36:43.541 Firmware Activate/Download: Not Supported 00:36:43.541 Namespace Management: Supported 00:36:43.541 Device Self-Test: Not Supported 00:36:43.541 Directives: Supported 00:36:43.541 NVMe-MI: Not Supported 00:36:43.541 Virtualization Management: Not Supported 00:36:43.541 Doorbell Buffer Config: Supported 00:36:43.541 Get LBA Status Capability: Not Supported 00:36:43.541 Command & Feature Lockdown Capability: Not Supported 00:36:43.541 Abort Command Limit: 4 00:36:43.541 Async Event Request Limit: 4 00:36:43.541 Number of Firmware Slots: N/A 00:36:43.541 Firmware Slot 1 Read-Only: N/A 00:36:43.541 Firmware Activation Without Reset: N/A 00:36:43.541 Multiple Update Detection Support: N/A 00:36:43.542 Firmware Update Granularity: No Information Provided 00:36:43.542 Per-Namespace SMART Log: Yes 00:36:43.542 Asymmetric Namespace Access Log Page: Not Supported 00:36:43.542 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:36:43.542 Command Effects Log Page: Supported 00:36:43.542 Get Log Page Extended Data: Supported 00:36:43.542 Telemetry Log Pages: Not Supported 00:36:43.542 Persistent Event Log Pages: Not Supported 00:36:43.542 Supported Log Pages Log Page: May Support 00:36:43.542 Commands Supported & Effects Log Page: Not Supported 00:36:43.542 Feature Identifiers & Effects Log Page:May Support 00:36:43.542 NVMe-MI Commands & Effects Log Page: May Support 00:36:43.542 Data Area 4 for Telemetry Log: Not Supported 00:36:43.542 Error Log Page Entries Supported: 1 00:36:43.542 Keep Alive: Not Supported 00:36:43.542 00:36:43.542 NVM Command Set Attributes 00:36:43.542 ========================== 00:36:43.542 Submission Queue Entry Size 00:36:43.542 Max: 64 00:36:43.542 Min: 64 00:36:43.542 Completion Queue Entry Size 00:36:43.542 Max: 16 
00:36:43.542 Min: 16 00:36:43.542 Number of Namespaces: 256 00:36:43.542 Compare Command: Supported 00:36:43.542 Write Uncorrectable Command: Not Supported 00:36:43.542 Dataset Management Command: Supported 00:36:43.542 Write Zeroes Command: Supported 00:36:43.542 Set Features Save Field: Supported 00:36:43.542 Reservations: Not Supported 00:36:43.542 Timestamp: Supported 00:36:43.542 Copy: Supported 00:36:43.542 Volatile Write Cache: Present 00:36:43.542 Atomic Write Unit (Normal): 1 00:36:43.542 Atomic Write Unit (PFail): 1 00:36:43.542 Atomic Compare & Write Unit: 1 00:36:43.542 Fused Compare & Write: Not Supported 00:36:43.542 Scatter-Gather List 00:36:43.542 SGL Command Set: Supported 00:36:43.542 SGL Keyed: Not Supported 00:36:43.542 SGL Bit Bucket Descriptor: Not Supported 00:36:43.542 SGL Metadata Pointer: Not Supported 00:36:43.542 Oversized SGL: Not Supported 00:36:43.542 SGL Metadata Address: Not Supported 00:36:43.542 SGL Offset: Not Supported 00:36:43.542 Transport SGL Data Block: Not Supported 00:36:43.542 Replay Protected Memory Block: Not Supported 00:36:43.542 00:36:43.542 Firmware Slot Information 00:36:43.542 ========================= 00:36:43.542 Active slot: 1 00:36:43.542 Slot 1 Firmware Revision: 1.0 00:36:43.542 00:36:43.542 00:36:43.542 Commands Supported and Effects 00:36:43.542 ============================== 00:36:43.542 Admin Commands 00:36:43.542 -------------- 00:36:43.542 Delete I/O Submission Queue (00h): Supported 00:36:43.542 Create I/O Submission Queue (01h): Supported 00:36:43.542 Get Log Page (02h): Supported 00:36:43.542 Delete I/O Completion Queue (04h): Supported 00:36:43.542 Create I/O Completion Queue (05h): Supported 00:36:43.542 Identify (06h): Supported 00:36:43.542 Abort (08h): Supported 00:36:43.542 Set Features (09h): Supported 00:36:43.542 Get Features (0Ah): Supported 00:36:43.542 Asynchronous Event Request (0Ch): Supported 00:36:43.542 Namespace Attachment (15h): Supported NS-Inventory-Change 00:36:43.542 Directive Send (19h): Supported 00:36:43.542 Directive Receive (1Ah): Supported 00:36:43.542 Virtualization Management (1Ch): Supported 00:36:43.542 Doorbell Buffer Config (7Ch): Supported 00:36:43.542 Format NVM (80h): Supported LBA-Change 00:36:43.542 I/O Commands 00:36:43.542 ------------ 00:36:43.542 Flush (00h): Supported LBA-Change 00:36:43.542 Write (01h): Supported LBA-Change 00:36:43.542 Read (02h): Supported 00:36:43.542 Compare (05h): Supported 00:36:43.542 Write Zeroes (08h): Supported LBA-Change 00:36:43.542 Dataset Management (09h): Supported LBA-Change 00:36:43.542 Unknown (0Ch): Supported 00:36:43.542 Unknown (12h): Supported 00:36:43.542 Copy (19h): Supported LBA-Change 00:36:43.542 Unknown (1Dh): Supported LBA-Change 00:36:43.542 00:36:43.542 Error Log 00:36:43.542 ========= 00:36:43.542 00:36:43.542 Arbitration 00:36:43.542 =========== 00:36:43.542 Arbitration Burst: no limit 00:36:43.542 00:36:43.542 Power Management 00:36:43.542 ================ 00:36:43.542 Number of Power States: 1 00:36:43.542 Current Power State: Power State #0 00:36:43.542 Power State #0: 00:36:43.542 Max Power: 25.00 W 00:36:43.542 Non-Operational State: Operational 00:36:43.542 Entry Latency: 16 microseconds 00:36:43.542 Exit Latency: 4 microseconds 00:36:43.542 Relative Read Throughput: 0 00:36:43.542 Relative Read Latency: 0 00:36:43.542 Relative Write Throughput: 0 00:36:43.542 Relative Write Latency: 0 00:36:43.542 Idle Power: Not Reported 00:36:43.542 Active Power: Not Reported 00:36:43.542 Non-Operational Permissive Mode: Not Supported 
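
The Doorbell Stride: 4 bytes reported for these controllers corresponds to CAP.DSTRD = 0. The NVMe spec places the submission queue y tail doorbell at 0x1000 + (2y) * (4 << DSTRD) and the completion queue y head doorbell at 0x1000 + (2y + 1) * (4 << DSTRD); a short sketch of the offset math:

    #include <stdio.h>
    #include <stdint.h>

    /* Doorbell register offsets per the NVMe spec, for CAP.DSTRD = 0
     * ("Doorbell Stride: 4 bytes" in the log above). */
    static uint32_t sq_tail_doorbell(uint32_t qid, uint32_t dstrd)
    {
        return 0x1000 + (2 * qid) * (4u << dstrd);
    }

    static uint32_t cq_head_doorbell(uint32_t qid, uint32_t dstrd)
    {
        return 0x1000 + (2 * qid + 1) * (4u << dstrd);
    }

    int main(void)
    {
        /* Admin queue (qid 0) and the first I/O queue pair (qid 1). */
        for (uint32_t qid = 0; qid <= 1; qid++) {
            printf("qid %u: SQ tail 0x%04x, CQ head 0x%04x\n",
                   qid, sq_tail_doorbell(qid, 0), cq_head_doorbell(qid, 0));
        }
        return 0;
    }
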
00:36:43.542 00:36:43.542 Health Information 00:36:43.542 ================== 00:36:43.542 Critical Warnings: 00:36:43.542 Available Spare Space: OK 00:36:43.542 Temperature: OK 00:36:43.542 Device Reliability: OK 00:36:43.542 Read Only: No 00:36:43.542 Volatile Memory Backup: OK 00:36:43.542 Current Temperature: 323 Kelvin (50 Celsius) 00:36:43.542 Temperature Threshold: 343 Kelvin (70 Celsius) 00:36:43.542 Available Spare: 0% 00:36:43.542 Available Spare Threshold: 0% 00:36:43.542 Life Percentage Used: 0% 00:36:43.542 Data Units Read: 2454 00:36:43.542 Data Units Written: 2241 00:36:43.542 Host Read Commands: 112049 00:36:43.542 Host Write Commands: 110318 00:36:43.542 Controller Busy Time: 0 minutes 00:36:43.542 Power Cycles: 0 00:36:43.542 Power On Hours: 0 hours 00:36:43.542 Unsafe Shutdowns: 0 00:36:43.542 Unrecoverable Media Errors: 0 00:36:43.542 Lifetime Error Log Entries: 0 00:36:43.542 Warning Temperature Time: 0 minutes 00:36:43.542 Critical Temperature Time: 0 minutes 00:36:43.542 00:36:43.542 Number of Queues 00:36:43.542 ================ 00:36:43.542 Number of I/O Submission Queues: 64 00:36:43.542 Number of I/O Completion Queues: 64 00:36:43.542 00:36:43.542 ZNS Specific Controller Data 00:36:43.542 ============================ 00:36:43.542 Zone Append Size Limit: 0 00:36:43.542 00:36:43.542 00:36:43.542 Active Namespaces 00:36:43.542 ================= 00:36:43.542 Namespace ID:1 00:36:43.542 Error Recovery Timeout: Unlimited 00:36:43.542 Command Set Identifier: NVM (00h) 00:36:43.542 Deallocate: Supported 00:36:43.542 Deallocated/Unwritten Error: Supported 00:36:43.542 Deallocated Read Value: All 0x00 00:36:43.542 Deallocate in Write Zeroes: Not Supported 00:36:43.542 Deallocated Guard Field: 0xFFFF 00:36:43.542 Flush: Supported 00:36:43.542 Reservation: Not Supported 00:36:43.542 Namespace Sharing Capabilities: Private 00:36:43.542 Size (in LBAs): 1048576 (4GiB) 00:36:43.542 Capacity (in LBAs): 1048576 (4GiB) 00:36:43.542 Utilization (in LBAs): 1048576 (4GiB) 00:36:43.542 Thin Provisioning: Not Supported 00:36:43.542 Per-NS Atomic Units: No 00:36:43.542 Maximum Single Source Range Length: 128 00:36:43.542 Maximum Copy Length: 128 00:36:43.542 Maximum Source Range Count: 128 00:36:43.542 NGUID/EUI64 Never Reused: No 00:36:43.542 Namespace Write Protected: No 00:36:43.542 Number of LBA Formats: 8 00:36:43.542 Current LBA Format: LBA Format #04 00:36:43.542 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:43.542 LBA Format #01: Data Size: 512 Metadata Size: 8 00:36:43.542 LBA Format #02: Data Size: 512 Metadata Size: 16 00:36:43.542 LBA Format #03: Data Size: 512 Metadata Size: 64 00:36:43.542 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:36:43.542 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:36:43.543 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:36:43.543 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:36:43.543 00:36:43.543 NVM Specific Namespace Data 00:36:43.543 =========================== 00:36:43.543 Logical Block Storage Tag Mask: 0 00:36:43.543 Protection Information Capabilities: 00:36:43.543 16b Guard Protection Information Storage Tag Support: No 00:36:43.543 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:36:43.543 Storage Tag Check Read Support: No 00:36:43.543 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.543 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.543 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.543 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.543 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.543 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.543 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.543 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.543 Namespace ID:2 00:36:43.543 Error Recovery Timeout: Unlimited 00:36:43.543 Command Set Identifier: NVM (00h) 00:36:43.543 Deallocate: Supported 00:36:43.543 Deallocated/Unwritten Error: Supported 00:36:43.543 Deallocated Read Value: All 0x00 00:36:43.543 Deallocate in Write Zeroes: Not Supported 00:36:43.543 Deallocated Guard Field: 0xFFFF 00:36:43.543 Flush: Supported 00:36:43.543 Reservation: Not Supported 00:36:43.543 Namespace Sharing Capabilities: Private 00:36:43.543 Size (in LBAs): 1048576 (4GiB) 00:36:43.543 Capacity (in LBAs): 1048576 (4GiB) 00:36:43.543 Utilization (in LBAs): 1048576 (4GiB) 00:36:43.543 Thin Provisioning: Not Supported 00:36:43.543 Per-NS Atomic Units: No 00:36:43.543 Maximum Single Source Range Length: 128 00:36:43.543 Maximum Copy Length: 128 00:36:43.543 Maximum Source Range Count: 128 00:36:43.543 NGUID/EUI64 Never Reused: No 00:36:43.543 Namespace Write Protected: No 00:36:43.543 Number of LBA Formats: 8 00:36:43.543 Current LBA Format: LBA Format #04 00:36:43.543 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:43.543 LBA Format #01: Data Size: 512 Metadata Size: 8 00:36:43.543 LBA Format #02: Data Size: 512 Metadata Size: 16 00:36:43.543 LBA Format #03: Data Size: 512 Metadata Size: 64 00:36:43.543 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:36:43.543 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:36:43.543 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:36:43.543 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:36:43.543 00:36:43.543 NVM Specific Namespace Data 00:36:43.543 =========================== 00:36:43.543 Logical Block Storage Tag Mask: 0 00:36:43.543 Protection Information Capabilities: 00:36:43.543 16b Guard Protection Information Storage Tag Support: No 00:36:43.543 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:36:43.543 Storage Tag Check Read Support: No 00:36:43.543 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.543 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.543 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.543 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.543 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.543 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.543 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.543 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.543 Namespace ID:3 00:36:43.543 Error Recovery Timeout: Unlimited 00:36:43.543 Command Set Identifier: NVM (00h) 00:36:43.543 Deallocate: Supported 00:36:43.543 Deallocated/Unwritten Error: Supported 00:36:43.543 Deallocated Read 
Value: All 0x00 00:36:43.543 Deallocate in Write Zeroes: Not Supported 00:36:43.543 Deallocated Guard Field: 0xFFFF 00:36:43.543 Flush: Supported 00:36:43.543 Reservation: Not Supported 00:36:43.543 Namespace Sharing Capabilities: Private 00:36:43.543 Size (in LBAs): 1048576 (4GiB) 00:36:43.543 Capacity (in LBAs): 1048576 (4GiB) 00:36:43.543 Utilization (in LBAs): 1048576 (4GiB) 00:36:43.543 Thin Provisioning: Not Supported 00:36:43.543 Per-NS Atomic Units: No 00:36:43.543 Maximum Single Source Range Length: 128 00:36:43.543 Maximum Copy Length: 128 00:36:43.543 Maximum Source Range Count: 128 00:36:43.543 NGUID/EUI64 Never Reused: No 00:36:43.543 Namespace Write Protected: No 00:36:43.543 Number of LBA Formats: 8 00:36:43.543 Current LBA Format: LBA Format #04 00:36:43.543 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:43.543 LBA Format #01: Data Size: 512 Metadata Size: 8 00:36:43.543 LBA Format #02: Data Size: 512 Metadata Size: 16 00:36:43.543 LBA Format #03: Data Size: 512 Metadata Size: 64 00:36:43.543 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:36:43.543 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:36:43.543 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:36:43.543 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:36:43.543 00:36:43.543 NVM Specific Namespace Data 00:36:43.543 =========================== 00:36:43.543 Logical Block Storage Tag Mask: 0 00:36:43.543 Protection Information Capabilities: 00:36:43.543 16b Guard Protection Information Storage Tag Support: No 00:36:43.543 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:36:43.543 Storage Tag Check Read Support: No 00:36:43.543 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.543 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.543 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.543 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.543 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.543 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.543 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.543 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:43.802 17:33:44 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:36:43.802 17:33:44 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:36:44.062 ===================================================== 00:36:44.062 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:36:44.062 ===================================================== 00:36:44.062 Controller Capabilities/Features 00:36:44.062 ================================ 00:36:44.062 Vendor ID: 1b36 00:36:44.062 Subsystem Vendor ID: 1af4 00:36:44.062 Serial Number: 12343 00:36:44.062 Model Number: QEMU NVMe Ctrl 00:36:44.062 Firmware Version: 8.0.0 00:36:44.062 Recommended Arb Burst: 6 00:36:44.062 IEEE OUI Identifier: 00 54 52 00:36:44.062 Multi-path I/O 00:36:44.062 May have multiple subsystem ports: No 00:36:44.062 May have multiple controllers: Yes 00:36:44.062 Associated with SR-IOV VF: No 00:36:44.062 Max Data Transfer Size: 524288 00:36:44.062 Max Number of Namespaces: 
256 00:36:44.062 Max Number of I/O Queues: 64 00:36:44.062 NVMe Specification Version (VS): 1.4 00:36:44.062 NVMe Specification Version (Identify): 1.4 00:36:44.062 Maximum Queue Entries: 2048 00:36:44.062 Contiguous Queues Required: Yes 00:36:44.062 Arbitration Mechanisms Supported 00:36:44.062 Weighted Round Robin: Not Supported 00:36:44.062 Vendor Specific: Not Supported 00:36:44.062 Reset Timeout: 7500 ms 00:36:44.062 Doorbell Stride: 4 bytes 00:36:44.062 NVM Subsystem Reset: Not Supported 00:36:44.062 Command Sets Supported 00:36:44.062 NVM Command Set: Supported 00:36:44.062 Boot Partition: Not Supported 00:36:44.062 Memory Page Size Minimum: 4096 bytes 00:36:44.062 Memory Page Size Maximum: 65536 bytes 00:36:44.062 Persistent Memory Region: Not Supported 00:36:44.062 Optional Asynchronous Events Supported 00:36:44.062 Namespace Attribute Notices: Supported 00:36:44.062 Firmware Activation Notices: Not Supported 00:36:44.062 ANA Change Notices: Not Supported 00:36:44.062 PLE Aggregate Log Change Notices: Not Supported 00:36:44.062 LBA Status Info Alert Notices: Not Supported 00:36:44.062 EGE Aggregate Log Change Notices: Not Supported 00:36:44.062 Normal NVM Subsystem Shutdown event: Not Supported 00:36:44.062 Zone Descriptor Change Notices: Not Supported 00:36:44.062 Discovery Log Change Notices: Not Supported 00:36:44.062 Controller Attributes 00:36:44.062 128-bit Host Identifier: Not Supported 00:36:44.062 Non-Operational Permissive Mode: Not Supported 00:36:44.062 NVM Sets: Not Supported 00:36:44.062 Read Recovery Levels: Not Supported 00:36:44.062 Endurance Groups: Supported 00:36:44.062 Predictable Latency Mode: Not Supported 00:36:44.062 Traffic Based Keep Alive: Not Supported 00:36:44.062 Namespace Granularity: Not Supported 00:36:44.062 SQ Associations: Not Supported 00:36:44.062 UUID List: Not Supported 00:36:44.062 Multi-Domain Subsystem: Not Supported 00:36:44.062 Fixed Capacity Management: Not Supported 00:36:44.062 Variable Capacity Management: Not Supported 00:36:44.062 Delete Endurance Group: Not Supported 00:36:44.062 Delete NVM Set: Not Supported 00:36:44.062 Extended LBA Formats Supported: Supported 00:36:44.062 Flexible Data Placement Supported: Supported 00:36:44.062 00:36:44.062 Controller Memory Buffer Support 00:36:44.062 ================================ 00:36:44.062 Supported: No 00:36:44.062 00:36:44.062 Persistent Memory Region Support 00:36:44.062 ================================ 00:36:44.062 Supported: No 00:36:44.062 00:36:44.062 Admin Command Set Attributes 00:36:44.062 ============================ 00:36:44.062 Security Send/Receive: Not Supported 00:36:44.062 Format NVM: Supported 00:36:44.062 Firmware Activate/Download: Not Supported 00:36:44.062 Namespace Management: Supported 00:36:44.062 Device Self-Test: Not Supported 00:36:44.062 Directives: Supported 00:36:44.062 NVMe-MI: Not Supported 00:36:44.062 Virtualization Management: Not Supported 00:36:44.062 Doorbell Buffer Config: Supported 00:36:44.062 Get LBA Status Capability: Not Supported 00:36:44.062 Command & Feature Lockdown Capability: Not Supported 00:36:44.062 Abort Command Limit: 4 00:36:44.062 Async Event Request Limit: 4 00:36:44.062 Number of Firmware Slots: N/A 00:36:44.062 Firmware Slot 1 Read-Only: N/A 00:36:44.062 Firmware Activation Without Reset: N/A 00:36:44.062 Multiple Update Detection Support: N/A 00:36:44.062 Firmware Update Granularity: No Information Provided 00:36:44.062 Per-Namespace SMART Log: Yes 00:36:44.062 Asymmetric Namespace Access Log Page: Not Supported
00:36:44.062 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:36:44.062 Command Effects Log Page: Supported 00:36:44.062 Get Log Page Extended Data: Supported 00:36:44.062 Telemetry Log Pages: Not Supported 00:36:44.062 Persistent Event Log Pages: Not Supported 00:36:44.062 Supported Log Pages Log Page: May Support 00:36:44.062 Commands Supported & Effects Log Page: Not Supported 00:36:44.062 Feature Identifiers & Effects Log Page: May Support 00:36:44.062 NVMe-MI Commands & Effects Log Page: May Support 00:36:44.062 Data Area 4 for Telemetry Log: Not Supported 00:36:44.062 Error Log Page Entries Supported: 1 00:36:44.062 Keep Alive: Not Supported 00:36:44.062 00:36:44.062 NVM Command Set Attributes 00:36:44.062 ========================== 00:36:44.062 Submission Queue Entry Size 00:36:44.062 Max: 64 00:36:44.062 Min: 64 00:36:44.062 Completion Queue Entry Size 00:36:44.062 Max: 16 00:36:44.062 Min: 16 00:36:44.062 Number of Namespaces: 256 00:36:44.062 Compare Command: Supported 00:36:44.062 Write Uncorrectable Command: Not Supported 00:36:44.062 Dataset Management Command: Supported 00:36:44.062 Write Zeroes Command: Supported 00:36:44.062 Set Features Save Field: Supported 00:36:44.062 Reservations: Not Supported 00:36:44.062 Timestamp: Supported 00:36:44.062 Copy: Supported 00:36:44.062 Volatile Write Cache: Present 00:36:44.062 Atomic Write Unit (Normal): 1 00:36:44.062 Atomic Write Unit (PFail): 1 00:36:44.062 Atomic Compare & Write Unit: 1 00:36:44.062 Fused Compare & Write: Not Supported 00:36:44.062 Scatter-Gather List 00:36:44.062 SGL Command Set: Supported 00:36:44.062 SGL Keyed: Not Supported 00:36:44.062 SGL Bit Bucket Descriptor: Not Supported 00:36:44.062 SGL Metadata Pointer: Not Supported 00:36:44.062 Oversized SGL: Not Supported 00:36:44.062 SGL Metadata Address: Not Supported 00:36:44.062 SGL Offset: Not Supported 00:36:44.062 Transport SGL Data Block: Not Supported 00:36:44.062 Replay Protected Memory Block: Not Supported 00:36:44.062 00:36:44.062 Firmware Slot Information 00:36:44.062 ========================= 00:36:44.062 Active slot: 1 00:36:44.062 Slot 1 Firmware Revision: 1.0 00:36:44.062 00:36:44.062 00:36:44.062 Commands Supported and Effects 00:36:44.062 ============================== 00:36:44.062 Admin Commands 00:36:44.062 -------------- 00:36:44.062 Delete I/O Submission Queue (00h): Supported 00:36:44.062 Create I/O Submission Queue (01h): Supported 00:36:44.062 Get Log Page (02h): Supported 00:36:44.062 Delete I/O Completion Queue (04h): Supported 00:36:44.062 Create I/O Completion Queue (05h): Supported 00:36:44.063 Identify (06h): Supported 00:36:44.063 Abort (08h): Supported 00:36:44.063 Set Features (09h): Supported 00:36:44.063 Get Features (0Ah): Supported 00:36:44.063 Asynchronous Event Request (0Ch): Supported 00:36:44.063 Namespace Attachment (15h): Supported NS-Inventory-Change 00:36:44.063 Directive Send (19h): Supported 00:36:44.063 Directive Receive (1Ah): Supported 00:36:44.063 Virtualization Management (1Ch): Supported 00:36:44.063 Doorbell Buffer Config (7Ch): Supported 00:36:44.063 Format NVM (80h): Supported LBA-Change 00:36:44.063 I/O Commands 00:36:44.063 ------------ 00:36:44.063 Flush (00h): Supported LBA-Change 00:36:44.063 Write (01h): Supported LBA-Change 00:36:44.063 Read (02h): Supported 00:36:44.063 Compare (05h): Supported 00:36:44.063 Write Zeroes (08h): Supported LBA-Change 00:36:44.063 Dataset Management (09h): Supported LBA-Change 00:36:44.063 Unknown (0Ch): Supported 00:36:44.063 Unknown (12h): Supported 00:36:44.063 Copy
(19h): Supported LBA-Change 00:36:44.063 Unknown (1Dh): Supported LBA-Change 00:36:44.063 00:36:44.063 Error Log 00:36:44.063 ========= 00:36:44.063 00:36:44.063 Arbitration 00:36:44.063 =========== 00:36:44.063 Arbitration Burst: no limit 00:36:44.063 00:36:44.063 Power Management 00:36:44.063 ================ 00:36:44.063 Number of Power States: 1 00:36:44.063 Current Power State: Power State #0 00:36:44.063 Power State #0: 00:36:44.063 Max Power: 25.00 W 00:36:44.063 Non-Operational State: Operational 00:36:44.063 Entry Latency: 16 microseconds 00:36:44.063 Exit Latency: 4 microseconds 00:36:44.063 Relative Read Throughput: 0 00:36:44.063 Relative Read Latency: 0 00:36:44.063 Relative Write Throughput: 0 00:36:44.063 Relative Write Latency: 0 00:36:44.063 Idle Power: Not Reported 00:36:44.063 Active Power: Not Reported 00:36:44.063 Non-Operational Permissive Mode: Not Supported 00:36:44.063 00:36:44.063 Health Information 00:36:44.063 ================== 00:36:44.063 Critical Warnings: 00:36:44.063 Available Spare Space: OK 00:36:44.063 Temperature: OK 00:36:44.063 Device Reliability: OK 00:36:44.063 Read Only: No 00:36:44.063 Volatile Memory Backup: OK 00:36:44.063 Current Temperature: 323 Kelvin (50 Celsius) 00:36:44.063 Temperature Threshold: 343 Kelvin (70 Celsius) 00:36:44.063 Available Spare: 0% 00:36:44.063 Available Spare Threshold: 0% 00:36:44.063 Life Percentage Used: 0% 00:36:44.063 Data Units Read: 874 00:36:44.063 Data Units Written: 803 00:36:44.063 Host Read Commands: 37802 00:36:44.063 Host Write Commands: 37228 00:36:44.063 Controller Busy Time: 0 minutes 00:36:44.063 Power Cycles: 0 00:36:44.063 Power On Hours: 0 hours 00:36:44.063 Unsafe Shutdowns: 0 00:36:44.063 Unrecoverable Media Errors: 0 00:36:44.063 Lifetime Error Log Entries: 0 00:36:44.063 Warning Temperature Time: 0 minutes 00:36:44.063 Critical Temperature Time: 0 minutes 00:36:44.063 00:36:44.063 Number of Queues 00:36:44.063 ================ 00:36:44.063 Number of I/O Submission Queues: 64 00:36:44.063 Number of I/O Completion Queues: 64 00:36:44.063 00:36:44.063 ZNS Specific Controller Data 00:36:44.063 ============================ 00:36:44.063 Zone Append Size Limit: 0 00:36:44.063 00:36:44.063 00:36:44.063 Active Namespaces 00:36:44.063 ================= 00:36:44.063 Namespace ID:1 00:36:44.063 Error Recovery Timeout: Unlimited 00:36:44.063 Command Set Identifier: NVM (00h) 00:36:44.063 Deallocate: Supported 00:36:44.063 Deallocated/Unwritten Error: Supported 00:36:44.063 Deallocated Read Value: All 0x00 00:36:44.063 Deallocate in Write Zeroes: Not Supported 00:36:44.063 Deallocated Guard Field: 0xFFFF 00:36:44.063 Flush: Supported 00:36:44.063 Reservation: Not Supported 00:36:44.063 Namespace Sharing Capabilities: Multiple Controllers 00:36:44.063 Size (in LBAs): 262144 (1GiB) 00:36:44.063 Capacity (in LBAs): 262144 (1GiB) 00:36:44.063 Utilization (in LBAs): 262144 (1GiB) 00:36:44.063 Thin Provisioning: Not Supported 00:36:44.063 Per-NS Atomic Units: No 00:36:44.063 Maximum Single Source Range Length: 128 00:36:44.063 Maximum Copy Length: 128 00:36:44.063 Maximum Source Range Count: 128 00:36:44.063 NGUID/EUI64 Never Reused: No 00:36:44.063 Namespace Write Protected: No 00:36:44.063 Endurance group ID: 1 00:36:44.063 Number of LBA Formats: 8 00:36:44.063 Current LBA Format: LBA Format #04 00:36:44.063 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:44.063 LBA Format #01: Data Size: 512 Metadata Size: 8 00:36:44.063 LBA Format #02: Data Size: 512 Metadata Size: 16 00:36:44.063 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:36:44.063 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:36:44.063 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:36:44.063 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:36:44.063 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:36:44.063 00:36:44.063 Get Feature FDP: 00:36:44.063 ================ 00:36:44.063 Enabled: Yes 00:36:44.063 FDP configuration index: 0 00:36:44.063 00:36:44.063 FDP configurations log page 00:36:44.063 =========================== 00:36:44.063 Number of FDP configurations: 1 00:36:44.063 Version: 0 00:36:44.063 Size: 112 00:36:44.063 FDP Configuration Descriptor: 0 00:36:44.063 Descriptor Size: 96 00:36:44.063 Reclaim Group Identifier format: 2 00:36:44.063 FDP Volatile Write Cache: Not Present 00:36:44.063 FDP Configuration: Valid 00:36:44.063 Vendor Specific Size: 0 00:36:44.063 Number of Reclaim Groups: 2 00:36:44.063 Number of Reclaim Unit Handles: 8 00:36:44.063 Max Placement Identifiers: 128 00:36:44.063 Number of Namespaces Supported: 256 00:36:44.063 Reclaim Unit Nominal Size: 6000000 bytes 00:36:44.063 Estimated Reclaim Unit Time Limit: Not Reported 00:36:44.063 RUH Desc #000: RUH Type: Initially Isolated 00:36:44.063 RUH Desc #001: RUH Type: Initially Isolated 00:36:44.063 RUH Desc #002: RUH Type: Initially Isolated 00:36:44.063 RUH Desc #003: RUH Type: Initially Isolated 00:36:44.063 RUH Desc #004: RUH Type: Initially Isolated 00:36:44.063 RUH Desc #005: RUH Type: Initially Isolated 00:36:44.063 RUH Desc #006: RUH Type: Initially Isolated 00:36:44.063 RUH Desc #007: RUH Type: Initially Isolated 00:36:44.063 00:36:44.063 FDP reclaim unit handle usage log page 00:36:44.063 ====================================== 00:36:44.063 Number of Reclaim Unit Handles: 8 00:36:44.063 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:36:44.063 RUH Usage Desc #001: RUH Attributes: Unused 00:36:44.063 RUH Usage Desc #002: RUH Attributes: Unused 00:36:44.063 RUH Usage Desc #003: RUH Attributes: Unused 00:36:44.063 RUH Usage Desc #004: RUH Attributes: Unused 00:36:44.063 RUH Usage Desc #005: RUH Attributes: Unused 00:36:44.063 RUH Usage Desc #006: RUH Attributes: Unused 00:36:44.063 RUH Usage Desc #007: RUH Attributes: Unused 00:36:44.063 00:36:44.063 FDP statistics log page 00:36:44.063 ======================= 00:36:44.063 Host bytes with metadata written: 512663552 00:36:44.063 Media bytes with metadata written: 512720896 00:36:44.063 Media bytes erased: 0 00:36:44.063 00:36:44.063 FDP events log page 00:36:44.063 =================== 00:36:44.063 Number of FDP events: 0 00:36:44.063 00:36:44.063 NVM Specific Namespace Data 00:36:44.063 =========================== 00:36:44.063 Logical Block Storage Tag Mask: 0 00:36:44.063 Protection Information Capabilities: 00:36:44.063 16b Guard Protection Information Storage Tag Support: No 00:36:44.063 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:36:44.063 Storage Tag Check Read Support: No 00:36:44.063 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:44.063 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:44.063 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:44.063 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:44.063 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:44.063 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:44.063 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:44.063 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:36:44.063 00:36:44.063 real 0m1.730s 00:36:44.063 user 0m0.620s 00:36:44.063 sys 0m0.898s 00:36:44.063 17:33:44 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:44.063 17:33:44 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:36:44.063 ************************************ 00:36:44.063 END TEST nvme_identify 00:36:44.063 ************************************ 00:36:44.063 17:33:44 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:36:44.063 17:33:44 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:44.063 17:33:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:44.063 17:33:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:36:44.063 ************************************ 00:36:44.063 START TEST nvme_perf 00:36:44.063 ************************************ 00:36:44.063 17:33:44 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:36:44.063 17:33:44 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:36:45.441 Initializing NVMe Controllers 00:36:45.441 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:36:45.441 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:36:45.441 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:36:45.441 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:36:45.441 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:36:45.441 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:36:45.441 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:36:45.441 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:36:45.441 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:36:45.441 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:36:45.441 Initialization complete. Launching workers. 
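For reference, the identify/perf pair exercised above can be reproduced outside the autotest harness. What follows is a minimal sketch, assuming SPDK is already built at the path shown in the log and the target devices have been bound to a userspace driver (e.g. via scripts/setup.sh); the per-flag comments are best-effort readings of the invocation logged above, not authoritative option documentation:
#!/usr/bin/env bash
set -euo pipefail
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin   # build location taken from the log
BDF=0000:00:13.0                                  # one of the PCIe controllers probed above
# Dump controller and namespace data for a single controller, as nvme.sh line 16 does.
"$SPDK_BIN/spdk_nvme_identify" -r "trtype:PCIe traddr:$BDF" -i 0
# Queued-read latency run mirroring nvme.sh line 22:
#   -q 128    queue depth
#   -o 12288  I/O size in bytes (12 KiB, i.e. three 4 KiB blocks)
#   -w read   workload type
#   -t 1      run time in seconds
#   -L -L     latency tracking; doubled in the log, which matches the detailed
#             per-bucket histograms printed below
#   -i 0      shared-memory ID; -N is passed as in the logged invocation
"$SPDK_BIN/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
Note that perf attaches to every probed controller (all four QEMU devices and their namespaces are associated with lcore 0 above), so the summary table that follows reports one row per attached namespace plus a total.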
00:36:45.441 ======================================================== 00:36:45.441 Latency(us) 00:36:45.441 Device Information : IOPS MiB/s Average min max 00:36:45.441 PCIE (0000:00:10.0) NSID 1 from core 0: 13468.00 157.83 9525.33 8044.47 49497.51 00:36:45.441 PCIE (0000:00:11.0) NSID 1 from core 0: 13468.00 157.83 9510.68 8174.97 47258.36 00:36:45.441 PCIE (0000:00:13.0) NSID 1 from core 0: 13468.00 157.83 9494.33 8147.44 45806.83 00:36:45.441 PCIE (0000:00:12.0) NSID 1 from core 0: 13468.00 157.83 9478.28 8140.63 43662.84 00:36:45.441 PCIE (0000:00:12.0) NSID 2 from core 0: 13468.00 157.83 9461.78 8145.52 41728.28 00:36:45.441 PCIE (0000:00:12.0) NSID 3 from core 0: 13531.83 158.58 9400.61 8124.98 34799.18 00:36:45.441 ======================================================== 00:36:45.441 Total : 80871.83 947.72 9478.44 8044.47 49497.51 00:36:45.441 00:36:45.441 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:36:45.441 ================================================================================= 00:36:45.441 1.00000% : 8264.379us 00:36:45.441 10.00000% : 8474.937us 00:36:45.441 25.00000% : 8685.494us 00:36:45.442 50.00000% : 9001.330us 00:36:45.442 75.00000% : 9264.527us 00:36:45.442 90.00000% : 9685.642us 00:36:45.442 95.00000% : 11106.904us 00:36:45.442 98.00000% : 15791.807us 00:36:45.442 99.00000% : 18844.890us 00:36:45.442 99.50000% : 42532.601us 00:36:45.442 99.90000% : 49270.439us 00:36:45.442 99.99000% : 49480.996us 00:36:45.442 99.99900% : 49691.553us 00:36:45.442 99.99990% : 49691.553us 00:36:45.442 99.99999% : 49691.553us 00:36:45.442 00:36:45.442 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:36:45.442 ================================================================================= 00:36:45.442 1.00000% : 8369.658us 00:36:45.442 10.00000% : 8580.215us 00:36:45.442 25.00000% : 8738.133us 00:36:45.442 50.00000% : 9001.330us 00:36:45.442 75.00000% : 9211.888us 00:36:45.442 90.00000% : 9633.002us 00:36:45.442 95.00000% : 11317.462us 00:36:45.442 98.00000% : 16318.201us 00:36:45.442 99.00000% : 18318.496us 00:36:45.442 99.50000% : 40848.141us 00:36:45.442 99.90000% : 46954.307us 00:36:45.442 99.99000% : 47375.422us 00:36:45.442 99.99900% : 47375.422us 00:36:45.442 99.99990% : 47375.422us 00:36:45.442 99.99999% : 47375.422us 00:36:45.442 00:36:45.442 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:36:45.442 ================================================================================= 00:36:45.442 1.00000% : 8369.658us 00:36:45.442 10.00000% : 8527.576us 00:36:45.442 25.00000% : 8738.133us 00:36:45.442 50.00000% : 9001.330us 00:36:45.442 75.00000% : 9264.527us 00:36:45.442 90.00000% : 9633.002us 00:36:45.442 95.00000% : 11422.741us 00:36:45.442 98.00000% : 16634.037us 00:36:45.442 99.00000% : 17897.382us 00:36:45.442 99.50000% : 39584.797us 00:36:45.442 99.90000% : 45480.405us 00:36:45.442 99.99000% : 45901.520us 00:36:45.442 99.99900% : 45901.520us 00:36:45.442 99.99990% : 45901.520us 00:36:45.442 99.99999% : 45901.520us 00:36:45.442 00:36:45.442 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:36:45.442 ================================================================================= 00:36:45.442 1.00000% : 8369.658us 00:36:45.442 10.00000% : 8527.576us 00:36:45.442 25.00000% : 8738.133us 00:36:45.442 50.00000% : 9001.330us 00:36:45.442 75.00000% : 9264.527us 00:36:45.442 90.00000% : 9633.002us 00:36:45.442 95.00000% : 11264.822us 00:36:45.442 98.00000% : 16212.922us 00:36:45.442 99.00000% : 
18529.054us 00:36:45.442 99.50000% : 37689.780us 00:36:45.442 99.90000% : 43374.831us 00:36:45.442 99.99000% : 43795.945us 00:36:45.442 99.99900% : 43795.945us 00:36:45.442 99.99990% : 43795.945us 00:36:45.442 99.99999% : 43795.945us 00:36:45.442 00:36:45.442 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:36:45.442 ================================================================================= 00:36:45.442 1.00000% : 8369.658us 00:36:45.442 10.00000% : 8527.576us 00:36:45.442 25.00000% : 8738.133us 00:36:45.442 50.00000% : 9001.330us 00:36:45.442 75.00000% : 9211.888us 00:36:45.442 90.00000% : 9633.002us 00:36:45.442 95.00000% : 11264.822us 00:36:45.442 98.00000% : 15686.529us 00:36:45.442 99.00000% : 19581.841us 00:36:45.442 99.50000% : 35373.648us 00:36:45.442 99.90000% : 41479.814us 00:36:45.442 99.99000% : 41900.929us 00:36:45.442 99.99900% : 41900.929us 00:36:45.442 99.99990% : 41900.929us 00:36:45.442 99.99999% : 41900.929us 00:36:45.442 00:36:45.442 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:36:45.442 ================================================================================= 00:36:45.442 1.00000% : 8369.658us 00:36:45.442 10.00000% : 8580.215us 00:36:45.442 25.00000% : 8738.133us 00:36:45.442 50.00000% : 9001.330us 00:36:45.442 75.00000% : 9264.527us 00:36:45.442 90.00000% : 9685.642us 00:36:45.442 95.00000% : 11422.741us 00:36:45.442 98.00000% : 15475.971us 00:36:45.442 99.00000% : 19687.120us 00:36:45.442 99.50000% : 28214.696us 00:36:45.442 99.90000% : 34531.418us 00:36:45.442 99.99000% : 34952.533us 00:36:45.442 99.99900% : 34952.533us 00:36:45.442 99.99990% : 34952.533us 00:36:45.442 99.99999% : 34952.533us 00:36:45.442 00:36:45.442 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:36:45.442 ============================================================================== 00:36:45.442 Range in us Cumulative IO count 00:36:45.442 8001.182 - 8053.822: 0.0074% ( 1) 00:36:45.442 8053.822 - 8106.461: 0.0815% ( 10) 00:36:45.442 8106.461 - 8159.100: 0.1999% ( 16) 00:36:45.442 8159.100 - 8211.740: 0.4369% ( 32) 00:36:45.442 8211.740 - 8264.379: 1.1996% ( 103) 00:36:45.442 8264.379 - 8317.018: 2.5622% ( 184) 00:36:45.442 8317.018 - 8369.658: 4.3839% ( 246) 00:36:45.442 8369.658 - 8422.297: 7.1164% ( 369) 00:36:45.442 8422.297 - 8474.937: 10.3377% ( 435) 00:36:45.442 8474.937 - 8527.576: 14.0181% ( 497) 00:36:45.442 8527.576 - 8580.215: 17.7207% ( 500) 00:36:45.442 8580.215 - 8632.855: 21.9861% ( 576) 00:36:45.442 8632.855 - 8685.494: 26.3403% ( 588) 00:36:45.442 8685.494 - 8738.133: 30.6576% ( 583) 00:36:45.442 8738.133 - 8790.773: 35.1155% ( 602) 00:36:45.442 8790.773 - 8843.412: 39.4920% ( 591) 00:36:45.442 8843.412 - 8896.051: 44.1573% ( 630) 00:36:45.442 8896.051 - 8948.691: 48.6374% ( 605) 00:36:45.442 8948.691 - 9001.330: 53.1991% ( 616) 00:36:45.442 9001.330 - 9053.969: 57.7903% ( 620) 00:36:45.442 9053.969 - 9106.609: 62.4259% ( 626) 00:36:45.442 9106.609 - 9159.248: 66.8246% ( 594) 00:36:45.442 9159.248 - 9211.888: 71.4973% ( 631) 00:36:45.442 9211.888 - 9264.527: 75.4739% ( 537) 00:36:45.442 9264.527 - 9317.166: 79.2728% ( 513) 00:36:45.442 9317.166 - 9369.806: 82.1608% ( 390) 00:36:45.442 9369.806 - 9422.445: 84.3306% ( 293) 00:36:45.442 9422.445 - 9475.084: 86.2115% ( 254) 00:36:45.442 9475.084 - 9527.724: 87.6333% ( 192) 00:36:45.442 9527.724 - 9580.363: 88.7589% ( 152) 00:36:45.442 9580.363 - 9633.002: 89.6105% ( 115) 00:36:45.442 9633.002 - 9685.642: 90.2621% ( 88) 00:36:45.442 9685.642 - 9738.281: 
90.7583% ( 67) 00:36:45.442 9738.281 - 9790.920: 91.2248% ( 63) 00:36:45.442 9790.920 - 9843.560: 91.5062% ( 38) 00:36:45.442 9843.560 - 9896.199: 91.7802% ( 37) 00:36:45.442 9896.199 - 9948.839: 92.0838% ( 41) 00:36:45.442 9948.839 - 10001.478: 92.3800% ( 40) 00:36:45.442 10001.478 - 10054.117: 92.6466% ( 36) 00:36:45.442 10054.117 - 10106.757: 92.8688% ( 30) 00:36:45.442 10106.757 - 10159.396: 93.0465% ( 24) 00:36:45.442 10159.396 - 10212.035: 93.1872% ( 19) 00:36:45.442 10212.035 - 10264.675: 93.3427% ( 21) 00:36:45.442 10264.675 - 10317.314: 93.4834% ( 19) 00:36:45.442 10317.314 - 10369.953: 93.6389% ( 21) 00:36:45.442 10369.953 - 10422.593: 93.7648% ( 17) 00:36:45.442 10422.593 - 10475.232: 93.9055% ( 19) 00:36:45.442 10475.232 - 10527.871: 94.0388% ( 18) 00:36:45.442 10527.871 - 10580.511: 94.1573% ( 16) 00:36:45.442 10580.511 - 10633.150: 94.2387% ( 11) 00:36:45.442 10633.150 - 10685.790: 94.3424% ( 14) 00:36:45.442 10685.790 - 10738.429: 94.4313% ( 12) 00:36:45.442 10738.429 - 10791.068: 94.5498% ( 16) 00:36:45.442 10791.068 - 10843.708: 94.6534% ( 14) 00:36:45.442 10843.708 - 10896.347: 94.7645% ( 15) 00:36:45.442 10896.347 - 10948.986: 94.8534% ( 12) 00:36:45.442 10948.986 - 11001.626: 94.9052% ( 7) 00:36:45.442 11001.626 - 11054.265: 94.9496% ( 6) 00:36:45.442 11054.265 - 11106.904: 95.0163% ( 9) 00:36:45.442 11106.904 - 11159.544: 95.0607% ( 6) 00:36:45.442 11159.544 - 11212.183: 95.1200% ( 8) 00:36:45.442 11212.183 - 11264.822: 95.1718% ( 7) 00:36:45.442 11264.822 - 11317.462: 95.2310% ( 8) 00:36:45.442 11317.462 - 11370.101: 95.2903% ( 8) 00:36:45.442 11370.101 - 11422.741: 95.3717% ( 11) 00:36:45.442 11422.741 - 11475.380: 95.4236% ( 7) 00:36:45.442 11475.380 - 11528.019: 95.4754% ( 7) 00:36:45.442 11528.019 - 11580.659: 95.5421% ( 9) 00:36:45.442 11580.659 - 11633.298: 95.5791% ( 5) 00:36:45.442 11633.298 - 11685.937: 95.6309% ( 7) 00:36:45.442 11685.937 - 11738.577: 95.6976% ( 9) 00:36:45.442 11738.577 - 11791.216: 95.7420% ( 6) 00:36:45.442 11791.216 - 11843.855: 95.8012% ( 8) 00:36:45.442 11843.855 - 11896.495: 95.8531% ( 7) 00:36:45.442 11896.495 - 11949.134: 95.8975% ( 6) 00:36:45.442 11949.134 - 12001.773: 95.9642% ( 9) 00:36:45.442 12001.773 - 12054.413: 95.9790% ( 2) 00:36:45.442 12054.413 - 12107.052: 96.0456% ( 9) 00:36:45.442 12107.052 - 12159.692: 96.0752% ( 4) 00:36:45.442 12159.692 - 12212.331: 96.1123% ( 5) 00:36:45.442 12212.331 - 12264.970: 96.1641% ( 7) 00:36:45.442 12264.970 - 12317.610: 96.2011% ( 5) 00:36:45.442 12317.610 - 12370.249: 96.2307% ( 4) 00:36:45.442 12370.249 - 12422.888: 96.2604% ( 4) 00:36:45.442 12422.888 - 12475.528: 96.3122% ( 7) 00:36:45.442 12475.528 - 12528.167: 96.3566% ( 6) 00:36:45.442 12528.167 - 12580.806: 96.3863% ( 4) 00:36:45.442 12580.806 - 12633.446: 96.4233% ( 5) 00:36:45.442 12633.446 - 12686.085: 96.4677% ( 6) 00:36:45.442 12686.085 - 12738.724: 96.5121% ( 6) 00:36:45.442 12738.724 - 12791.364: 96.5492% ( 5) 00:36:45.442 12791.364 - 12844.003: 96.5788% ( 4) 00:36:45.442 12844.003 - 12896.643: 96.6232% ( 6) 00:36:45.442 12896.643 - 12949.282: 96.6602% ( 5) 00:36:45.442 12949.282 - 13001.921: 96.6677% ( 1) 00:36:45.442 13001.921 - 13054.561: 96.6825% ( 2) 00:36:45.442 13107.200 - 13159.839: 96.6899% ( 1) 00:36:45.442 13159.839 - 13212.479: 96.7121% ( 3) 00:36:45.442 13212.479 - 13265.118: 96.7195% ( 1) 00:36:45.442 13265.118 - 13317.757: 96.7269% ( 1) 00:36:45.442 13317.757 - 13370.397: 96.7417% ( 2) 00:36:45.443 13370.397 - 13423.036: 96.7491% ( 1) 00:36:45.443 13423.036 - 13475.676: 96.7565% ( 1) 00:36:45.443 13475.676 - 
13580.954: 96.7713% ( 2) 00:36:45.443 13580.954 - 13686.233: 96.8009% ( 4) 00:36:45.443 13686.233 - 13791.512: 96.8158% ( 2) 00:36:45.443 13791.512 - 13896.790: 96.8454% ( 4) 00:36:45.443 13896.790 - 14002.069: 96.8602% ( 2) 00:36:45.443 14002.069 - 14107.348: 96.8824% ( 3) 00:36:45.443 14107.348 - 14212.627: 96.9120% ( 4) 00:36:45.443 14212.627 - 14317.905: 96.9268% ( 2) 00:36:45.443 14317.905 - 14423.184: 96.9491% ( 3) 00:36:45.443 14423.184 - 14528.463: 96.9713% ( 3) 00:36:45.443 14528.463 - 14633.741: 97.0009% ( 4) 00:36:45.443 14633.741 - 14739.020: 97.0749% ( 10) 00:36:45.443 14739.020 - 14844.299: 97.1638% ( 12) 00:36:45.443 14844.299 - 14949.578: 97.2379% ( 10) 00:36:45.443 14949.578 - 15054.856: 97.3119% ( 10) 00:36:45.443 15054.856 - 15160.135: 97.4008% ( 12) 00:36:45.443 15160.135 - 15265.414: 97.5341% ( 18) 00:36:45.443 15265.414 - 15370.692: 97.6674% ( 18) 00:36:45.443 15370.692 - 15475.971: 97.7858% ( 16) 00:36:45.443 15475.971 - 15581.250: 97.8821% ( 13) 00:36:45.443 15581.250 - 15686.529: 97.9932% ( 15) 00:36:45.443 15686.529 - 15791.807: 98.0969% ( 14) 00:36:45.443 15791.807 - 15897.086: 98.2079% ( 15) 00:36:45.443 15897.086 - 16002.365: 98.2968% ( 12) 00:36:45.443 16002.365 - 16107.643: 98.4079% ( 15) 00:36:45.443 16107.643 - 16212.922: 98.4819% ( 10) 00:36:45.443 16212.922 - 16318.201: 98.5338% ( 7) 00:36:45.443 16318.201 - 16423.480: 98.5560% ( 3) 00:36:45.443 16423.480 - 16528.758: 98.5782% ( 3) 00:36:45.443 17686.824 - 17792.103: 98.6226% ( 6) 00:36:45.443 17792.103 - 17897.382: 98.6523% ( 4) 00:36:45.443 17897.382 - 18002.660: 98.6893% ( 5) 00:36:45.443 18002.660 - 18107.939: 98.7337% ( 6) 00:36:45.443 18107.939 - 18213.218: 98.7633% ( 4) 00:36:45.443 18213.218 - 18318.496: 98.8004% ( 5) 00:36:45.443 18318.496 - 18423.775: 98.8448% ( 6) 00:36:45.443 18423.775 - 18529.054: 98.8818% ( 5) 00:36:45.443 18529.054 - 18634.333: 98.9188% ( 5) 00:36:45.443 18634.333 - 18739.611: 98.9559% ( 5) 00:36:45.443 18739.611 - 18844.890: 99.0003% ( 6) 00:36:45.443 18844.890 - 18950.169: 99.0373% ( 5) 00:36:45.443 18950.169 - 19055.447: 99.0521% ( 2) 00:36:45.443 40637.584 - 40848.141: 99.1040% ( 7) 00:36:45.443 40848.141 - 41058.699: 99.1558% ( 7) 00:36:45.443 41058.699 - 41269.256: 99.2150% ( 8) 00:36:45.443 41269.256 - 41479.814: 99.2669% ( 7) 00:36:45.443 41479.814 - 41690.371: 99.2965% ( 4) 00:36:45.443 41690.371 - 41900.929: 99.3557% ( 8) 00:36:45.443 41900.929 - 42111.486: 99.4076% ( 7) 00:36:45.443 42111.486 - 42322.043: 99.4594% ( 7) 00:36:45.443 42322.043 - 42532.601: 99.5261% ( 9) 00:36:45.443 47585.979 - 47796.537: 99.5779% ( 7) 00:36:45.443 47796.537 - 48007.094: 99.6297% ( 7) 00:36:45.443 48007.094 - 48217.651: 99.6816% ( 7) 00:36:45.443 48217.651 - 48428.209: 99.7260% ( 6) 00:36:45.443 48428.209 - 48638.766: 99.7778% ( 7) 00:36:45.443 48638.766 - 48849.324: 99.8371% ( 8) 00:36:45.443 48849.324 - 49059.881: 99.8963% ( 8) 00:36:45.443 49059.881 - 49270.439: 99.9482% ( 7) 00:36:45.443 49270.439 - 49480.996: 99.9926% ( 6) 00:36:45.443 49480.996 - 49691.553: 100.0000% ( 1) 00:36:45.443 00:36:45.443 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:36:45.443 ============================================================================== 00:36:45.443 Range in us Cumulative IO count 00:36:45.443 8159.100 - 8211.740: 0.0889% ( 12) 00:36:45.443 8211.740 - 8264.379: 0.2592% ( 23) 00:36:45.443 8264.379 - 8317.018: 0.7405% ( 65) 00:36:45.443 8317.018 - 8369.658: 1.7624% ( 138) 00:36:45.443 8369.658 - 8422.297: 3.7841% ( 273) 00:36:45.443 8422.297 - 8474.937: 6.5536% ( 
374) 00:36:45.443 8474.937 - 8527.576: 9.9304% ( 456) 00:36:45.443 8527.576 - 8580.215: 13.7070% ( 510) 00:36:45.443 8580.215 - 8632.855: 18.1354% ( 598) 00:36:45.443 8632.855 - 8685.494: 23.0154% ( 659) 00:36:45.443 8685.494 - 8738.133: 28.0213% ( 676) 00:36:45.443 8738.133 - 8790.773: 33.1013% ( 686) 00:36:45.443 8790.773 - 8843.412: 38.4479% ( 722) 00:36:45.443 8843.412 - 8896.051: 43.8092% ( 724) 00:36:45.443 8896.051 - 8948.691: 49.2817% ( 739) 00:36:45.443 8948.691 - 9001.330: 54.6653% ( 727) 00:36:45.443 9001.330 - 9053.969: 60.0341% ( 725) 00:36:45.443 9053.969 - 9106.609: 65.3214% ( 714) 00:36:45.443 9106.609 - 9159.248: 70.3940% ( 685) 00:36:45.443 9159.248 - 9211.888: 75.0444% ( 628) 00:36:45.443 9211.888 - 9264.527: 78.8877% ( 519) 00:36:45.443 9264.527 - 9317.166: 81.9091% ( 408) 00:36:45.443 9317.166 - 9369.806: 84.3824% ( 334) 00:36:45.443 9369.806 - 9422.445: 86.4262% ( 276) 00:36:45.443 9422.445 - 9475.084: 87.9073% ( 200) 00:36:45.443 9475.084 - 9527.724: 89.0033% ( 148) 00:36:45.443 9527.724 - 9580.363: 89.8771% ( 118) 00:36:45.443 9580.363 - 9633.002: 90.5065% ( 85) 00:36:45.443 9633.002 - 9685.642: 90.9953% ( 66) 00:36:45.443 9685.642 - 9738.281: 91.3655% ( 50) 00:36:45.443 9738.281 - 9790.920: 91.7136% ( 47) 00:36:45.443 9790.920 - 9843.560: 92.0838% ( 50) 00:36:45.443 9843.560 - 9896.199: 92.4097% ( 44) 00:36:45.443 9896.199 - 9948.839: 92.6614% ( 34) 00:36:45.443 9948.839 - 10001.478: 92.8910% ( 31) 00:36:45.443 10001.478 - 10054.117: 93.0465% ( 21) 00:36:45.443 10054.117 - 10106.757: 93.2094% ( 22) 00:36:45.443 10106.757 - 10159.396: 93.3723% ( 22) 00:36:45.443 10159.396 - 10212.035: 93.5204% ( 20) 00:36:45.443 10212.035 - 10264.675: 93.6463% ( 17) 00:36:45.443 10264.675 - 10317.314: 93.8018% ( 21) 00:36:45.443 10317.314 - 10369.953: 93.9277% ( 17) 00:36:45.443 10369.953 - 10422.593: 94.0018% ( 10) 00:36:45.443 10422.593 - 10475.232: 94.0832% ( 11) 00:36:45.443 10475.232 - 10527.871: 94.1203% ( 5) 00:36:45.443 10527.871 - 10580.511: 94.1647% ( 6) 00:36:45.443 10580.511 - 10633.150: 94.1943% ( 4) 00:36:45.443 10633.150 - 10685.790: 94.2387% ( 6) 00:36:45.443 10685.790 - 10738.429: 94.2684% ( 4) 00:36:45.443 10738.429 - 10791.068: 94.3054% ( 5) 00:36:45.443 10791.068 - 10843.708: 94.3350% ( 4) 00:36:45.443 10843.708 - 10896.347: 94.4313% ( 13) 00:36:45.443 10896.347 - 10948.986: 94.5201% ( 12) 00:36:45.443 10948.986 - 11001.626: 94.5720% ( 7) 00:36:45.443 11001.626 - 11054.265: 94.6534% ( 11) 00:36:45.443 11054.265 - 11106.904: 94.7127% ( 8) 00:36:45.443 11106.904 - 11159.544: 94.7867% ( 10) 00:36:45.443 11159.544 - 11212.183: 94.8386% ( 7) 00:36:45.443 11212.183 - 11264.822: 94.9348% ( 13) 00:36:45.443 11264.822 - 11317.462: 95.0015% ( 9) 00:36:45.443 11317.462 - 11370.101: 95.0755% ( 10) 00:36:45.443 11370.101 - 11422.741: 95.1422% ( 9) 00:36:45.443 11422.741 - 11475.380: 95.2088% ( 9) 00:36:45.443 11475.380 - 11528.019: 95.2903% ( 11) 00:36:45.443 11528.019 - 11580.659: 95.3643% ( 10) 00:36:45.443 11580.659 - 11633.298: 95.4162% ( 7) 00:36:45.443 11633.298 - 11685.937: 95.4902% ( 10) 00:36:45.443 11685.937 - 11738.577: 95.5347% ( 6) 00:36:45.443 11738.577 - 11791.216: 95.5865% ( 7) 00:36:45.443 11791.216 - 11843.855: 95.6383% ( 7) 00:36:45.443 11843.855 - 11896.495: 95.6828% ( 6) 00:36:45.443 11896.495 - 11949.134: 95.7272% ( 6) 00:36:45.443 11949.134 - 12001.773: 95.7790% ( 7) 00:36:45.443 12001.773 - 12054.413: 95.8531% ( 10) 00:36:45.443 12054.413 - 12107.052: 95.9271% ( 10) 00:36:45.443 12107.052 - 12159.692: 95.9716% ( 6) 00:36:45.443 12159.692 - 12212.331: 
96.0382% ( 9) 00:36:45.443 12212.331 - 12264.970: 96.0900% ( 7) 00:36:45.443 12264.970 - 12317.610: 96.1567% ( 9) 00:36:45.443 12317.610 - 12370.249: 96.2233% ( 9) 00:36:45.443 12370.249 - 12422.888: 96.2752% ( 7) 00:36:45.443 12422.888 - 12475.528: 96.3122% ( 5) 00:36:45.443 12475.528 - 12528.167: 96.3418% ( 4) 00:36:45.443 12528.167 - 12580.806: 96.3714% ( 4) 00:36:45.443 12580.806 - 12633.446: 96.3863% ( 2) 00:36:45.443 12633.446 - 12686.085: 96.4011% ( 2) 00:36:45.443 12686.085 - 12738.724: 96.4159% ( 2) 00:36:45.443 12738.724 - 12791.364: 96.4307% ( 2) 00:36:45.443 12791.364 - 12844.003: 96.4677% ( 5) 00:36:45.443 12844.003 - 12896.643: 96.5047% ( 5) 00:36:45.443 12896.643 - 12949.282: 96.5344% ( 4) 00:36:45.443 12949.282 - 13001.921: 96.5566% ( 3) 00:36:45.443 13001.921 - 13054.561: 96.5788% ( 3) 00:36:45.443 13054.561 - 13107.200: 96.6084% ( 4) 00:36:45.443 13107.200 - 13159.839: 96.6306% ( 3) 00:36:45.443 13159.839 - 13212.479: 96.6602% ( 4) 00:36:45.443 13212.479 - 13265.118: 96.6825% ( 3) 00:36:45.443 13265.118 - 13317.757: 96.7121% ( 4) 00:36:45.443 13317.757 - 13370.397: 96.7565% ( 6) 00:36:45.443 13370.397 - 13423.036: 96.7787% ( 3) 00:36:45.443 13423.036 - 13475.676: 96.8158% ( 5) 00:36:45.443 13475.676 - 13580.954: 96.8750% ( 8) 00:36:45.443 13580.954 - 13686.233: 96.9120% ( 5) 00:36:45.443 13686.233 - 13791.512: 96.9416% ( 4) 00:36:45.443 13791.512 - 13896.790: 96.9639% ( 3) 00:36:45.443 13896.790 - 14002.069: 96.9935% ( 4) 00:36:45.443 14002.069 - 14107.348: 97.0083% ( 2) 00:36:45.443 14107.348 - 14212.627: 97.0305% ( 3) 00:36:45.443 14212.627 - 14317.905: 97.0453% ( 2) 00:36:45.443 14317.905 - 14423.184: 97.1120% ( 9) 00:36:45.443 14423.184 - 14528.463: 97.1786% ( 9) 00:36:45.443 14528.463 - 14633.741: 97.2527% ( 10) 00:36:45.443 14633.741 - 14739.020: 97.3119% ( 8) 00:36:45.443 14739.020 - 14844.299: 97.3637% ( 7) 00:36:45.443 14844.299 - 14949.578: 97.4008% ( 5) 00:36:45.443 14949.578 - 15054.856: 97.4452% ( 6) 00:36:45.443 15054.856 - 15160.135: 97.4822% ( 5) 00:36:45.443 15160.135 - 15265.414: 97.5193% ( 5) 00:36:45.443 15265.414 - 15370.692: 97.5637% ( 6) 00:36:45.443 15370.692 - 15475.971: 97.6007% ( 5) 00:36:45.443 15475.971 - 15581.250: 97.6525% ( 7) 00:36:45.444 15581.250 - 15686.529: 97.6822% ( 4) 00:36:45.444 15686.529 - 15791.807: 97.7192% ( 5) 00:36:45.444 15791.807 - 15897.086: 97.7488% ( 4) 00:36:45.444 15897.086 - 16002.365: 97.7784% ( 4) 00:36:45.444 16002.365 - 16107.643: 97.8525% ( 10) 00:36:45.444 16107.643 - 16212.922: 97.9339% ( 11) 00:36:45.444 16212.922 - 16318.201: 98.0154% ( 11) 00:36:45.444 16318.201 - 16423.480: 98.1043% ( 12) 00:36:45.444 16423.480 - 16528.758: 98.1857% ( 11) 00:36:45.444 16528.758 - 16634.037: 98.2598% ( 10) 00:36:45.444 16634.037 - 16739.316: 98.3486% ( 12) 00:36:45.444 16739.316 - 16844.594: 98.4301% ( 11) 00:36:45.444 16844.594 - 16949.873: 98.5190% ( 12) 00:36:45.444 16949.873 - 17055.152: 98.5486% ( 4) 00:36:45.444 17055.152 - 17160.431: 98.5782% ( 4) 00:36:45.444 17370.988 - 17476.267: 98.6300% ( 7) 00:36:45.444 17476.267 - 17581.545: 98.6819% ( 7) 00:36:45.444 17581.545 - 17686.824: 98.7263% ( 6) 00:36:45.444 17686.824 - 17792.103: 98.7707% ( 6) 00:36:45.444 17792.103 - 17897.382: 98.8152% ( 6) 00:36:45.444 17897.382 - 18002.660: 98.8596% ( 6) 00:36:45.444 18002.660 - 18107.939: 98.9040% ( 6) 00:36:45.444 18107.939 - 18213.218: 98.9559% ( 7) 00:36:45.444 18213.218 - 18318.496: 99.0003% ( 6) 00:36:45.444 18318.496 - 18423.775: 99.0447% ( 6) 00:36:45.444 18423.775 - 18529.054: 99.0521% ( 1) 00:36:45.444 38953.124 - 
39163.682: 99.0966% ( 6) 00:36:45.444 39163.682 - 39374.239: 99.1558% ( 8) 00:36:45.444 39374.239 - 39584.797: 99.2076% ( 7) 00:36:45.444 39584.797 - 39795.354: 99.2669% ( 8) 00:36:45.444 39795.354 - 40005.912: 99.3261% ( 8) 00:36:45.444 40005.912 - 40216.469: 99.3780% ( 7) 00:36:45.444 40216.469 - 40427.027: 99.4372% ( 8) 00:36:45.444 40427.027 - 40637.584: 99.4964% ( 8) 00:36:45.444 40637.584 - 40848.141: 99.5261% ( 4) 00:36:45.444 45480.405 - 45690.962: 99.5705% ( 6) 00:36:45.444 45690.962 - 45901.520: 99.6297% ( 8) 00:36:45.444 45901.520 - 46112.077: 99.6816% ( 7) 00:36:45.444 46112.077 - 46322.635: 99.7408% ( 8) 00:36:45.444 46322.635 - 46533.192: 99.8001% ( 8) 00:36:45.444 46533.192 - 46743.749: 99.8593% ( 8) 00:36:45.444 46743.749 - 46954.307: 99.9185% ( 8) 00:36:45.444 46954.307 - 47164.864: 99.9704% ( 7) 00:36:45.444 47164.864 - 47375.422: 100.0000% ( 4) 00:36:45.444 00:36:45.444 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:36:45.444 ============================================================================== 00:36:45.444 Range in us Cumulative IO count 00:36:45.444 8106.461 - 8159.100: 0.0074% ( 1) 00:36:45.444 8159.100 - 8211.740: 0.0815% ( 10) 00:36:45.444 8211.740 - 8264.379: 0.3555% ( 37) 00:36:45.444 8264.379 - 8317.018: 0.7998% ( 60) 00:36:45.444 8317.018 - 8369.658: 1.9180% ( 151) 00:36:45.444 8369.658 - 8422.297: 3.6508% ( 234) 00:36:45.444 8422.297 - 8474.937: 6.5684% ( 394) 00:36:45.444 8474.937 - 8527.576: 10.1007% ( 477) 00:36:45.444 8527.576 - 8580.215: 13.8255% ( 503) 00:36:45.444 8580.215 - 8632.855: 18.2168% ( 593) 00:36:45.444 8632.855 - 8685.494: 22.9191% ( 635) 00:36:45.444 8685.494 - 8738.133: 27.8880% ( 671) 00:36:45.444 8738.133 - 8790.773: 32.9384% ( 682) 00:36:45.444 8790.773 - 8843.412: 38.1887% ( 709) 00:36:45.444 8843.412 - 8896.051: 43.5204% ( 720) 00:36:45.444 8896.051 - 8948.691: 48.8300% ( 717) 00:36:45.444 8948.691 - 9001.330: 54.1395% ( 717) 00:36:45.444 9001.330 - 9053.969: 59.5898% ( 736) 00:36:45.444 9053.969 - 9106.609: 65.0770% ( 741) 00:36:45.444 9106.609 - 9159.248: 70.2384% ( 697) 00:36:45.444 9159.248 - 9211.888: 74.8149% ( 618) 00:36:45.444 9211.888 - 9264.527: 78.9174% ( 554) 00:36:45.444 9264.527 - 9317.166: 81.9683% ( 412) 00:36:45.444 9317.166 - 9369.806: 84.2861% ( 313) 00:36:45.444 9369.806 - 9422.445: 86.1893% ( 257) 00:36:45.444 9422.445 - 9475.084: 87.8110% ( 219) 00:36:45.444 9475.084 - 9527.724: 88.9440% ( 153) 00:36:45.444 9527.724 - 9580.363: 89.8178% ( 118) 00:36:45.444 9580.363 - 9633.002: 90.5880% ( 104) 00:36:45.444 9633.002 - 9685.642: 91.2248% ( 86) 00:36:45.444 9685.642 - 9738.281: 91.7654% ( 73) 00:36:45.444 9738.281 - 9790.920: 92.2319% ( 63) 00:36:45.444 9790.920 - 9843.560: 92.6244% ( 53) 00:36:45.444 9843.560 - 9896.199: 92.8762% ( 34) 00:36:45.444 9896.199 - 9948.839: 93.0687% ( 26) 00:36:45.444 9948.839 - 10001.478: 93.2761% ( 28) 00:36:45.444 10001.478 - 10054.117: 93.4242% ( 20) 00:36:45.444 10054.117 - 10106.757: 93.5723% ( 20) 00:36:45.444 10106.757 - 10159.396: 93.7278% ( 21) 00:36:45.444 10159.396 - 10212.035: 93.8759% ( 20) 00:36:45.444 10212.035 - 10264.675: 93.9722% ( 13) 00:36:45.444 10264.675 - 10317.314: 94.0758% ( 14) 00:36:45.444 10317.314 - 10369.953: 94.1573% ( 11) 00:36:45.444 10369.953 - 10422.593: 94.2313% ( 10) 00:36:45.444 10422.593 - 10475.232: 94.2758% ( 6) 00:36:45.444 10475.232 - 10527.871: 94.3054% ( 4) 00:36:45.444 10527.871 - 10580.511: 94.3350% ( 4) 00:36:45.444 10580.511 - 10633.150: 94.3572% ( 3) 00:36:45.444 10633.150 - 10685.790: 94.3943% ( 5) 00:36:45.444 
10685.790 - 10738.429: 94.4165% ( 3) 00:36:45.444 10738.429 - 10791.068: 94.4387% ( 3) 00:36:45.444 10791.068 - 10843.708: 94.4757% ( 5) 00:36:45.444 10843.708 - 10896.347: 94.5350% ( 8) 00:36:45.444 10896.347 - 10948.986: 94.5794% ( 6) 00:36:45.444 10948.986 - 11001.626: 94.6238% ( 6) 00:36:45.444 11001.626 - 11054.265: 94.6831% ( 8) 00:36:45.444 11054.265 - 11106.904: 94.7349% ( 7) 00:36:45.444 11106.904 - 11159.544: 94.7719% ( 5) 00:36:45.444 11159.544 - 11212.183: 94.8238% ( 7) 00:36:45.444 11212.183 - 11264.822: 94.8756% ( 7) 00:36:45.444 11264.822 - 11317.462: 94.9274% ( 7) 00:36:45.444 11317.462 - 11370.101: 94.9719% ( 6) 00:36:45.444 11370.101 - 11422.741: 95.0237% ( 7) 00:36:45.444 11422.741 - 11475.380: 95.0681% ( 6) 00:36:45.444 11475.380 - 11528.019: 95.1200% ( 7) 00:36:45.444 11528.019 - 11580.659: 95.1570% ( 5) 00:36:45.444 11580.659 - 11633.298: 95.2162% ( 8) 00:36:45.444 11633.298 - 11685.937: 95.2533% ( 5) 00:36:45.444 11685.937 - 11738.577: 95.2977% ( 6) 00:36:45.444 11738.577 - 11791.216: 95.3421% ( 6) 00:36:45.444 11791.216 - 11843.855: 95.4014% ( 8) 00:36:45.444 11843.855 - 11896.495: 95.4532% ( 7) 00:36:45.444 11896.495 - 11949.134: 95.4976% ( 6) 00:36:45.444 11949.134 - 12001.773: 95.5569% ( 8) 00:36:45.444 12001.773 - 12054.413: 95.6013% ( 6) 00:36:45.444 12054.413 - 12107.052: 95.6531% ( 7) 00:36:45.444 12107.052 - 12159.692: 95.6828% ( 4) 00:36:45.444 12159.692 - 12212.331: 95.7198% ( 5) 00:36:45.444 12212.331 - 12264.970: 95.8161% ( 13) 00:36:45.444 12264.970 - 12317.610: 95.8753% ( 8) 00:36:45.444 12317.610 - 12370.249: 95.9123% ( 5) 00:36:45.444 12370.249 - 12422.888: 95.9568% ( 6) 00:36:45.444 12422.888 - 12475.528: 96.0012% ( 6) 00:36:45.444 12475.528 - 12528.167: 96.0456% ( 6) 00:36:45.444 12528.167 - 12580.806: 96.0975% ( 7) 00:36:45.444 12580.806 - 12633.446: 96.1419% ( 6) 00:36:45.444 12633.446 - 12686.085: 96.1789% ( 5) 00:36:45.444 12686.085 - 12738.724: 96.2233% ( 6) 00:36:45.444 12738.724 - 12791.364: 96.3048% ( 11) 00:36:45.444 12791.364 - 12844.003: 96.3789% ( 10) 00:36:45.444 12844.003 - 12896.643: 96.4233% ( 6) 00:36:45.444 12896.643 - 12949.282: 96.4899% ( 9) 00:36:45.444 12949.282 - 13001.921: 96.5566% ( 9) 00:36:45.444 13001.921 - 13054.561: 96.5862% ( 4) 00:36:45.444 13054.561 - 13107.200: 96.6158% ( 4) 00:36:45.444 13107.200 - 13159.839: 96.6306% ( 2) 00:36:45.444 13159.839 - 13212.479: 96.6602% ( 4) 00:36:45.444 13212.479 - 13265.118: 96.6825% ( 3) 00:36:45.444 13265.118 - 13317.757: 96.6973% ( 2) 00:36:45.444 13317.757 - 13370.397: 96.7269% ( 4) 00:36:45.444 13370.397 - 13423.036: 96.7565% ( 4) 00:36:45.444 13423.036 - 13475.676: 96.7861% ( 4) 00:36:45.444 13475.676 - 13580.954: 96.8380% ( 7) 00:36:45.444 13580.954 - 13686.233: 96.9342% ( 13) 00:36:45.444 13686.233 - 13791.512: 97.0231% ( 12) 00:36:45.444 13791.512 - 13896.790: 97.0898% ( 9) 00:36:45.444 13896.790 - 14002.069: 97.1416% ( 7) 00:36:45.444 14002.069 - 14107.348: 97.2008% ( 8) 00:36:45.444 14107.348 - 14212.627: 97.2675% ( 9) 00:36:45.444 14212.627 - 14317.905: 97.3415% ( 10) 00:36:45.444 14317.905 - 14423.184: 97.4082% ( 9) 00:36:45.444 14423.184 - 14528.463: 97.4748% ( 9) 00:36:45.444 14528.463 - 14633.741: 97.5489% ( 10) 00:36:45.444 14633.741 - 14739.020: 97.5933% ( 6) 00:36:45.444 14739.020 - 14844.299: 97.6748% ( 11) 00:36:45.444 14844.299 - 14949.578: 97.7118% ( 5) 00:36:45.444 14949.578 - 15054.856: 97.7192% ( 1) 00:36:45.444 15054.856 - 15160.135: 97.7414% ( 3) 00:36:45.444 15160.135 - 15265.414: 97.7562% ( 2) 00:36:45.444 15265.414 - 15370.692: 97.7784% ( 3) 
00:36:45.444 15370.692 - 15475.971: 97.7932% ( 2) 00:36:45.444 15475.971 - 15581.250: 97.8081% ( 2) 00:36:45.444 15581.250 - 15686.529: 97.8303% ( 3) 00:36:45.444 15686.529 - 15791.807: 97.8451% ( 2) 00:36:45.444 15791.807 - 15897.086: 97.8673% ( 3) 00:36:45.444 15897.086 - 16002.365: 97.8895% ( 3) 00:36:45.444 16002.365 - 16107.643: 97.9117% ( 3) 00:36:45.444 16107.643 - 16212.922: 97.9339% ( 3) 00:36:45.444 16212.922 - 16318.201: 97.9488% ( 2) 00:36:45.444 16318.201 - 16423.480: 97.9710% ( 3) 00:36:45.444 16423.480 - 16528.758: 97.9858% ( 2) 00:36:45.444 16528.758 - 16634.037: 98.0302% ( 6) 00:36:45.444 16634.037 - 16739.316: 98.0969% ( 9) 00:36:45.444 16739.316 - 16844.594: 98.1635% ( 9) 00:36:45.444 16844.594 - 16949.873: 98.2302% ( 9) 00:36:45.444 16949.873 - 17055.152: 98.3486% ( 16) 00:36:45.444 17055.152 - 17160.431: 98.4597% ( 15) 00:36:45.444 17160.431 - 17265.709: 98.5560% ( 13) 00:36:45.444 17265.709 - 17370.988: 98.6523% ( 13) 00:36:45.444 17370.988 - 17476.267: 98.7485% ( 13) 00:36:45.445 17476.267 - 17581.545: 98.8448% ( 13) 00:36:45.445 17581.545 - 17686.824: 98.9336% ( 12) 00:36:45.445 17686.824 - 17792.103: 98.9929% ( 8) 00:36:45.445 17792.103 - 17897.382: 99.0447% ( 7) 00:36:45.445 17897.382 - 18002.660: 99.0521% ( 1) 00:36:45.445 37689.780 - 37900.337: 99.0966% ( 6) 00:36:45.445 37900.337 - 38110.895: 99.1558% ( 8) 00:36:45.445 38110.895 - 38321.452: 99.2076% ( 7) 00:36:45.445 38321.452 - 38532.010: 99.2595% ( 7) 00:36:45.445 38532.010 - 38742.567: 99.3187% ( 8) 00:36:45.445 38742.567 - 38953.124: 99.3780% ( 8) 00:36:45.445 38953.124 - 39163.682: 99.4372% ( 8) 00:36:45.445 39163.682 - 39374.239: 99.4890% ( 7) 00:36:45.445 39374.239 - 39584.797: 99.5261% ( 5) 00:36:45.445 43795.945 - 44006.503: 99.5409% ( 2) 00:36:45.445 44006.503 - 44217.060: 99.5927% ( 7) 00:36:45.445 44217.060 - 44427.618: 99.6445% ( 7) 00:36:45.445 44427.618 - 44638.175: 99.6964% ( 7) 00:36:45.445 44638.175 - 44848.733: 99.7556% ( 8) 00:36:45.445 44848.733 - 45059.290: 99.8075% ( 7) 00:36:45.445 45059.290 - 45269.847: 99.8593% ( 7) 00:36:45.445 45269.847 - 45480.405: 99.9111% ( 7) 00:36:45.445 45480.405 - 45690.962: 99.9704% ( 8) 00:36:45.445 45690.962 - 45901.520: 100.0000% ( 4) 00:36:45.445 00:36:45.445 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:36:45.445 ============================================================================== 00:36:45.445 Range in us Cumulative IO count 00:36:45.445 8106.461 - 8159.100: 0.0222% ( 3) 00:36:45.445 8159.100 - 8211.740: 0.1333% ( 15) 00:36:45.445 8211.740 - 8264.379: 0.3258% ( 26) 00:36:45.445 8264.379 - 8317.018: 0.7850% ( 62) 00:36:45.445 8317.018 - 8369.658: 1.6217% ( 113) 00:36:45.445 8369.658 - 8422.297: 3.5619% ( 262) 00:36:45.445 8422.297 - 8474.937: 6.5092% ( 398) 00:36:45.445 8474.937 - 8527.576: 10.0489% ( 478) 00:36:45.445 8527.576 - 8580.215: 13.9366% ( 525) 00:36:45.445 8580.215 - 8632.855: 18.2464% ( 582) 00:36:45.445 8632.855 - 8685.494: 23.0228% ( 645) 00:36:45.445 8685.494 - 8738.133: 28.2213% ( 702) 00:36:45.445 8738.133 - 8790.773: 33.1976% ( 672) 00:36:45.445 8790.773 - 8843.412: 38.5071% ( 717) 00:36:45.445 8843.412 - 8896.051: 43.8241% ( 718) 00:36:45.445 8896.051 - 8948.691: 49.1558% ( 720) 00:36:45.445 8948.691 - 9001.330: 54.5172% ( 724) 00:36:45.445 9001.330 - 9053.969: 59.8860% ( 725) 00:36:45.445 9053.969 - 9106.609: 65.1585% ( 712) 00:36:45.445 9106.609 - 9159.248: 70.1496% ( 674) 00:36:45.445 9159.248 - 9211.888: 74.8149% ( 630) 00:36:45.445 9211.888 - 9264.527: 78.7396% ( 530) 00:36:45.445 9264.527 - 9317.166: 
81.7906% ( 412) 00:36:45.445 9317.166 - 9369.806: 84.1973% ( 325) 00:36:45.445 9369.806 - 9422.445: 86.1226% ( 260) 00:36:45.445 9422.445 - 9475.084: 87.6555% ( 207) 00:36:45.445 9475.084 - 9527.724: 88.8107% ( 156) 00:36:45.445 9527.724 - 9580.363: 89.7142% ( 122) 00:36:45.445 9580.363 - 9633.002: 90.3954% ( 92) 00:36:45.445 9633.002 - 9685.642: 90.9953% ( 81) 00:36:45.445 9685.642 - 9738.281: 91.4544% ( 62) 00:36:45.445 9738.281 - 9790.920: 91.8395% ( 52) 00:36:45.445 9790.920 - 9843.560: 92.1949% ( 48) 00:36:45.445 9843.560 - 9896.199: 92.5281% ( 45) 00:36:45.445 9896.199 - 9948.839: 92.8169% ( 39) 00:36:45.445 9948.839 - 10001.478: 93.1206% ( 41) 00:36:45.445 10001.478 - 10054.117: 93.3353% ( 29) 00:36:45.445 10054.117 - 10106.757: 93.4390% ( 14) 00:36:45.445 10106.757 - 10159.396: 93.5649% ( 17) 00:36:45.445 10159.396 - 10212.035: 93.6759% ( 15) 00:36:45.445 10212.035 - 10264.675: 93.8018% ( 17) 00:36:45.445 10264.675 - 10317.314: 93.9203% ( 16) 00:36:45.445 10317.314 - 10369.953: 94.0388% ( 16) 00:36:45.445 10369.953 - 10422.593: 94.1277% ( 12) 00:36:45.445 10422.593 - 10475.232: 94.2165% ( 12) 00:36:45.445 10475.232 - 10527.871: 94.2832% ( 9) 00:36:45.445 10527.871 - 10580.511: 94.3572% ( 10) 00:36:45.445 10580.511 - 10633.150: 94.4313% ( 10) 00:36:45.445 10633.150 - 10685.790: 94.5275% ( 13) 00:36:45.445 10685.790 - 10738.429: 94.6090% ( 11) 00:36:45.445 10738.429 - 10791.068: 94.6831% ( 10) 00:36:45.445 10791.068 - 10843.708: 94.7201% ( 5) 00:36:45.445 10843.708 - 10896.347: 94.7571% ( 5) 00:36:45.445 10896.347 - 10948.986: 94.7941% ( 5) 00:36:45.445 10948.986 - 11001.626: 94.8312% ( 5) 00:36:45.445 11001.626 - 11054.265: 94.8608% ( 4) 00:36:45.445 11054.265 - 11106.904: 94.8978% ( 5) 00:36:45.445 11106.904 - 11159.544: 94.9274% ( 4) 00:36:45.445 11159.544 - 11212.183: 94.9570% ( 4) 00:36:45.445 11212.183 - 11264.822: 95.0015% ( 6) 00:36:45.445 11264.822 - 11317.462: 95.0533% ( 7) 00:36:45.445 11317.462 - 11370.101: 95.0903% ( 5) 00:36:45.445 11370.101 - 11422.741: 95.1274% ( 5) 00:36:45.445 11422.741 - 11475.380: 95.1570% ( 4) 00:36:45.445 11475.380 - 11528.019: 95.1940% ( 5) 00:36:45.445 11528.019 - 11580.659: 95.2236% ( 4) 00:36:45.445 11580.659 - 11633.298: 95.2681% ( 6) 00:36:45.445 11633.298 - 11685.937: 95.2977% ( 4) 00:36:45.445 11685.937 - 11738.577: 95.3347% ( 5) 00:36:45.445 11738.577 - 11791.216: 95.3717% ( 5) 00:36:45.445 11791.216 - 11843.855: 95.4088% ( 5) 00:36:45.445 11843.855 - 11896.495: 95.4384% ( 4) 00:36:45.445 11896.495 - 11949.134: 95.4828% ( 6) 00:36:45.445 11949.134 - 12001.773: 95.5050% ( 3) 00:36:45.445 12001.773 - 12054.413: 95.5273% ( 3) 00:36:45.445 12054.413 - 12107.052: 95.5495% ( 3) 00:36:45.445 12107.052 - 12159.692: 95.5569% ( 1) 00:36:45.445 12159.692 - 12212.331: 95.5717% ( 2) 00:36:45.445 12212.331 - 12264.970: 95.5939% ( 3) 00:36:45.445 12264.970 - 12317.610: 95.6235% ( 4) 00:36:45.445 12317.610 - 12370.249: 95.6457% ( 3) 00:36:45.445 12370.249 - 12422.888: 95.6828% ( 5) 00:36:45.445 12422.888 - 12475.528: 95.7124% ( 4) 00:36:45.445 12475.528 - 12528.167: 95.7494% ( 5) 00:36:45.445 12528.167 - 12580.806: 95.7790% ( 4) 00:36:45.445 12580.806 - 12633.446: 95.8012% ( 3) 00:36:45.445 12633.446 - 12686.085: 95.8457% ( 6) 00:36:45.445 12686.085 - 12738.724: 95.8679% ( 3) 00:36:45.445 12738.724 - 12791.364: 95.8753% ( 1) 00:36:45.445 12791.364 - 12844.003: 95.8901% ( 2) 00:36:45.445 12844.003 - 12896.643: 95.9049% ( 2) 00:36:45.445 12896.643 - 12949.282: 95.9568% ( 7) 00:36:45.445 12949.282 - 13001.921: 95.9864% ( 4) 00:36:45.445 13001.921 - 
13054.561: 96.0234% ( 5) 00:36:45.445 13054.561 - 13107.200: 96.1567% ( 18) 00:36:45.445 13107.200 - 13159.839: 96.2011% ( 6) 00:36:45.445 13159.839 - 13212.479: 96.2752% ( 10) 00:36:45.445 13212.479 - 13265.118: 96.3640% ( 12) 00:36:45.445 13265.118 - 13317.757: 96.4455% ( 11) 00:36:45.445 13317.757 - 13370.397: 96.5121% ( 9) 00:36:45.445 13370.397 - 13423.036: 96.5936% ( 11) 00:36:45.445 13423.036 - 13475.676: 96.6751% ( 11) 00:36:45.445 13475.676 - 13580.954: 96.7935% ( 16) 00:36:45.445 13580.954 - 13686.233: 96.9491% ( 21) 00:36:45.445 13686.233 - 13791.512: 97.1046% ( 21) 00:36:45.445 13791.512 - 13896.790: 97.2453% ( 19) 00:36:45.445 13896.790 - 14002.069: 97.3489% ( 14) 00:36:45.445 14002.069 - 14107.348: 97.4526% ( 14) 00:36:45.445 14107.348 - 14212.627: 97.5563% ( 14) 00:36:45.445 14212.627 - 14317.905: 97.6081% ( 7) 00:36:45.445 14317.905 - 14423.184: 97.6303% ( 3) 00:36:45.445 14844.299 - 14949.578: 97.6822% ( 7) 00:36:45.445 14949.578 - 15054.856: 97.7192% ( 5) 00:36:45.445 15054.856 - 15160.135: 97.7340% ( 2) 00:36:45.445 15160.135 - 15265.414: 97.7414% ( 1) 00:36:45.445 15265.414 - 15370.692: 97.7636% ( 3) 00:36:45.445 15370.692 - 15475.971: 97.8081% ( 6) 00:36:45.445 15475.971 - 15581.250: 97.8229% ( 2) 00:36:45.445 15581.250 - 15686.529: 97.8525% ( 4) 00:36:45.445 15686.529 - 15791.807: 97.8747% ( 3) 00:36:45.445 15791.807 - 15897.086: 97.9043% ( 4) 00:36:45.445 15897.086 - 16002.365: 97.9191% ( 2) 00:36:45.445 16002.365 - 16107.643: 97.9710% ( 7) 00:36:45.445 16107.643 - 16212.922: 98.0376% ( 9) 00:36:45.445 16212.922 - 16318.201: 98.1043% ( 9) 00:36:45.445 16318.201 - 16423.480: 98.1783% ( 10) 00:36:45.445 16423.480 - 16528.758: 98.2450% ( 9) 00:36:45.445 16528.758 - 16634.037: 98.3264% ( 11) 00:36:45.445 16634.037 - 16739.316: 98.3931% ( 9) 00:36:45.445 16739.316 - 16844.594: 98.4449% ( 7) 00:36:45.445 16844.594 - 16949.873: 98.4819% ( 5) 00:36:45.445 16949.873 - 17055.152: 98.5338% ( 7) 00:36:45.445 17055.152 - 17160.431: 98.5782% ( 6) 00:36:45.445 17581.545 - 17686.824: 98.5930% ( 2) 00:36:45.445 17686.824 - 17792.103: 98.6448% ( 7) 00:36:45.445 17792.103 - 17897.382: 98.6967% ( 7) 00:36:45.445 17897.382 - 18002.660: 98.7485% ( 7) 00:36:45.445 18002.660 - 18107.939: 98.8004% ( 7) 00:36:45.445 18107.939 - 18213.218: 98.8596% ( 8) 00:36:45.445 18213.218 - 18318.496: 98.9114% ( 7) 00:36:45.445 18318.496 - 18423.775: 98.9633% ( 7) 00:36:45.445 18423.775 - 18529.054: 99.0151% ( 7) 00:36:45.445 18529.054 - 18634.333: 99.0521% ( 5) 00:36:45.445 35794.763 - 36005.320: 99.0966% ( 6) 00:36:45.445 36005.320 - 36215.878: 99.1558% ( 8) 00:36:45.445 36215.878 - 36426.435: 99.2150% ( 8) 00:36:45.445 36426.435 - 36636.993: 99.2743% ( 8) 00:36:45.445 36636.993 - 36847.550: 99.3261% ( 7) 00:36:45.445 36847.550 - 37058.108: 99.3854% ( 8) 00:36:45.445 37058.108 - 37268.665: 99.4372% ( 7) 00:36:45.445 37268.665 - 37479.222: 99.4964% ( 8) 00:36:45.445 37479.222 - 37689.780: 99.5261% ( 4) 00:36:45.445 41900.929 - 42111.486: 99.5631% ( 5) 00:36:45.445 42111.486 - 42322.043: 99.6223% ( 8) 00:36:45.445 42322.043 - 42532.601: 99.6816% ( 8) 00:36:45.445 42532.601 - 42743.158: 99.7408% ( 8) 00:36:45.445 42743.158 - 42953.716: 99.8001% ( 8) 00:36:45.445 42953.716 - 43164.273: 99.8593% ( 8) 00:36:45.445 43164.273 - 43374.831: 99.9111% ( 7) 00:36:45.445 43374.831 - 43585.388: 99.9778% ( 9) 00:36:45.445 43585.388 - 43795.945: 100.0000% ( 3) 00:36:45.446 00:36:45.446 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:36:45.446 
============================================================================== 00:36:45.446 Range in us Cumulative IO count 00:36:45.446 8106.461 - 8159.100: 0.0222% ( 3) 00:36:45.446 8159.100 - 8211.740: 0.1407% ( 16) 00:36:45.446 8211.740 - 8264.379: 0.3184% ( 24) 00:36:45.446 8264.379 - 8317.018: 0.8664% ( 74) 00:36:45.446 8317.018 - 8369.658: 1.8365% ( 131) 00:36:45.446 8369.658 - 8422.297: 3.7767% ( 262) 00:36:45.446 8422.297 - 8474.937: 6.5462% ( 374) 00:36:45.446 8474.937 - 8527.576: 10.1525% ( 487) 00:36:45.446 8527.576 - 8580.215: 13.9736% ( 516) 00:36:45.446 8580.215 - 8632.855: 18.3871% ( 596) 00:36:45.446 8632.855 - 8685.494: 22.9858% ( 621) 00:36:45.446 8685.494 - 8738.133: 28.0361% ( 682) 00:36:45.446 8738.133 - 8790.773: 33.3161% ( 713) 00:36:45.446 8790.773 - 8843.412: 38.3960% ( 686) 00:36:45.446 8843.412 - 8896.051: 43.7796% ( 727) 00:36:45.446 8896.051 - 8948.691: 49.2002% ( 732) 00:36:45.446 8948.691 - 9001.330: 54.5394% ( 721) 00:36:45.446 9001.330 - 9053.969: 59.9600% ( 732) 00:36:45.446 9053.969 - 9106.609: 65.2177% ( 710) 00:36:45.446 9106.609 - 9159.248: 70.3717% ( 696) 00:36:45.446 9159.248 - 9211.888: 75.0592% ( 633) 00:36:45.446 9211.888 - 9264.527: 78.8877% ( 517) 00:36:45.446 9264.527 - 9317.166: 81.8646% ( 402) 00:36:45.446 9317.166 - 9369.806: 84.2269% ( 319) 00:36:45.446 9369.806 - 9422.445: 86.0930% ( 252) 00:36:45.446 9422.445 - 9475.084: 87.5963% ( 203) 00:36:45.446 9475.084 - 9527.724: 88.7959% ( 162) 00:36:45.446 9527.724 - 9580.363: 89.6993% ( 122) 00:36:45.446 9580.363 - 9633.002: 90.2770% ( 78) 00:36:45.446 9633.002 - 9685.642: 90.8398% ( 76) 00:36:45.446 9685.642 - 9738.281: 91.3063% ( 63) 00:36:45.446 9738.281 - 9790.920: 91.6988% ( 53) 00:36:45.446 9790.920 - 9843.560: 92.0542% ( 48) 00:36:45.446 9843.560 - 9896.199: 92.4245% ( 50) 00:36:45.446 9896.199 - 9948.839: 92.7355% ( 42) 00:36:45.446 9948.839 - 10001.478: 92.9428% ( 28) 00:36:45.446 10001.478 - 10054.117: 93.1057% ( 22) 00:36:45.446 10054.117 - 10106.757: 93.2539% ( 20) 00:36:45.446 10106.757 - 10159.396: 93.3649% ( 15) 00:36:45.446 10159.396 - 10212.035: 93.4908% ( 17) 00:36:45.446 10212.035 - 10264.675: 93.5797% ( 12) 00:36:45.446 10264.675 - 10317.314: 93.6759% ( 13) 00:36:45.446 10317.314 - 10369.953: 93.7722% ( 13) 00:36:45.446 10369.953 - 10422.593: 93.8685% ( 13) 00:36:45.446 10422.593 - 10475.232: 93.9796% ( 15) 00:36:45.446 10475.232 - 10527.871: 94.0684% ( 12) 00:36:45.446 10527.871 - 10580.511: 94.1573% ( 12) 00:36:45.446 10580.511 - 10633.150: 94.2387% ( 11) 00:36:45.446 10633.150 - 10685.790: 94.3350% ( 13) 00:36:45.446 10685.790 - 10738.429: 94.4313% ( 13) 00:36:45.446 10738.429 - 10791.068: 94.4979% ( 9) 00:36:45.446 10791.068 - 10843.708: 94.5720% ( 10) 00:36:45.446 10843.708 - 10896.347: 94.6386% ( 9) 00:36:45.446 10896.347 - 10948.986: 94.7201% ( 11) 00:36:45.446 10948.986 - 11001.626: 94.7867% ( 9) 00:36:45.446 11001.626 - 11054.265: 94.8608% ( 10) 00:36:45.446 11054.265 - 11106.904: 94.9200% ( 8) 00:36:45.446 11106.904 - 11159.544: 94.9570% ( 5) 00:36:45.446 11159.544 - 11212.183: 94.9941% ( 5) 00:36:45.446 11212.183 - 11264.822: 95.0311% ( 5) 00:36:45.446 11264.822 - 11317.462: 95.0681% ( 5) 00:36:45.446 11317.462 - 11370.101: 95.1052% ( 5) 00:36:45.446 11370.101 - 11422.741: 95.1348% ( 4) 00:36:45.446 11422.741 - 11475.380: 95.1570% ( 3) 00:36:45.446 11475.380 - 11528.019: 95.1718% ( 2) 00:36:45.446 11528.019 - 11580.659: 95.2014% ( 4) 00:36:45.446 11580.659 - 11633.298: 95.2310% ( 4) 00:36:45.446 11633.298 - 11685.937: 95.2681% ( 5) 00:36:45.446 11685.937 - 
11738.577: 95.3051% ( 5) 00:36:45.446 11738.577 - 11791.216: 95.3347% ( 4) 00:36:45.446 11791.216 - 11843.855: 95.3569% ( 3) 00:36:45.446 11843.855 - 11896.495: 95.3717% ( 2) 00:36:45.446 11896.495 - 11949.134: 95.4088% ( 5) 00:36:45.446 11949.134 - 12001.773: 95.4310% ( 3) 00:36:45.446 12001.773 - 12054.413: 95.4754% ( 6) 00:36:45.446 12054.413 - 12107.052: 95.5124% ( 5) 00:36:45.446 12107.052 - 12159.692: 95.5569% ( 6) 00:36:45.446 12159.692 - 12212.331: 95.6087% ( 7) 00:36:45.446 12212.331 - 12264.970: 95.6605% ( 7) 00:36:45.446 12264.970 - 12317.610: 95.7050% ( 6) 00:36:45.446 12317.610 - 12370.249: 95.7494% ( 6) 00:36:45.446 12370.249 - 12422.888: 95.8012% ( 7) 00:36:45.446 12422.888 - 12475.528: 95.8531% ( 7) 00:36:45.446 12475.528 - 12528.167: 95.9049% ( 7) 00:36:45.446 12528.167 - 12580.806: 95.9493% ( 6) 00:36:45.446 12580.806 - 12633.446: 96.0012% ( 7) 00:36:45.446 12633.446 - 12686.085: 96.0382% ( 5) 00:36:45.446 12686.085 - 12738.724: 96.0975% ( 8) 00:36:45.446 12738.724 - 12791.364: 96.1419% ( 6) 00:36:45.446 12791.364 - 12844.003: 96.1789% ( 5) 00:36:45.446 12844.003 - 12896.643: 96.2159% ( 5) 00:36:45.446 12896.643 - 12949.282: 96.2678% ( 7) 00:36:45.446 12949.282 - 13001.921: 96.3048% ( 5) 00:36:45.446 13001.921 - 13054.561: 96.3492% ( 6) 00:36:45.446 13054.561 - 13107.200: 96.3789% ( 4) 00:36:45.446 13107.200 - 13159.839: 96.4085% ( 4) 00:36:45.446 13159.839 - 13212.479: 96.4381% ( 4) 00:36:45.446 13212.479 - 13265.118: 96.4603% ( 3) 00:36:45.446 13265.118 - 13317.757: 96.4899% ( 4) 00:36:45.446 13317.757 - 13370.397: 96.5270% ( 5) 00:36:45.446 13370.397 - 13423.036: 96.5714% ( 6) 00:36:45.446 13423.036 - 13475.676: 96.6158% ( 6) 00:36:45.446 13475.676 - 13580.954: 96.6973% ( 11) 00:36:45.446 13580.954 - 13686.233: 96.7491% ( 7) 00:36:45.446 13686.233 - 13791.512: 96.8084% ( 8) 00:36:45.446 13791.512 - 13896.790: 96.9046% ( 13) 00:36:45.446 13896.790 - 14002.069: 96.9861% ( 11) 00:36:45.446 14002.069 - 14107.348: 97.0675% ( 11) 00:36:45.446 14107.348 - 14212.627: 97.1490% ( 11) 00:36:45.446 14212.627 - 14317.905: 97.2379% ( 12) 00:36:45.446 14317.905 - 14423.184: 97.3415% ( 14) 00:36:45.446 14423.184 - 14528.463: 97.4452% ( 14) 00:36:45.446 14528.463 - 14633.741: 97.5341% ( 12) 00:36:45.446 14633.741 - 14739.020: 97.6525% ( 16) 00:36:45.446 14739.020 - 14844.299: 97.7118% ( 8) 00:36:45.446 14844.299 - 14949.578: 97.7710% ( 8) 00:36:45.446 14949.578 - 15054.856: 97.8229% ( 7) 00:36:45.446 15054.856 - 15160.135: 97.8451% ( 3) 00:36:45.446 15160.135 - 15265.414: 97.8747% ( 4) 00:36:45.446 15265.414 - 15370.692: 97.8969% ( 3) 00:36:45.446 15370.692 - 15475.971: 97.9191% ( 3) 00:36:45.446 15475.971 - 15581.250: 97.9710% ( 7) 00:36:45.446 15581.250 - 15686.529: 98.0376% ( 9) 00:36:45.446 15686.529 - 15791.807: 98.0895% ( 7) 00:36:45.446 15791.807 - 15897.086: 98.1413% ( 7) 00:36:45.446 15897.086 - 16002.365: 98.2005% ( 8) 00:36:45.446 16002.365 - 16107.643: 98.2746% ( 10) 00:36:45.446 16107.643 - 16212.922: 98.3338% ( 8) 00:36:45.446 16212.922 - 16318.201: 98.3931% ( 8) 00:36:45.446 16318.201 - 16423.480: 98.4375% ( 6) 00:36:45.446 16423.480 - 16528.758: 98.4819% ( 6) 00:36:45.446 16528.758 - 16634.037: 98.5190% ( 5) 00:36:45.446 16634.037 - 16739.316: 98.5634% ( 6) 00:36:45.446 16739.316 - 16844.594: 98.5782% ( 2) 00:36:45.446 18423.775 - 18529.054: 98.6004% ( 3) 00:36:45.446 18529.054 - 18634.333: 98.6523% ( 7) 00:36:45.446 18634.333 - 18739.611: 98.6967% ( 6) 00:36:45.446 18739.611 - 18844.890: 98.7411% ( 6) 00:36:45.446 18844.890 - 18950.169: 98.7855% ( 6) 00:36:45.446 
18950.169 - 19055.447: 98.8152% ( 4) 00:36:45.446 19055.447 - 19160.726: 98.8448% ( 4) 00:36:45.446 19160.726 - 19266.005: 98.8966% ( 7) 00:36:45.446 19266.005 - 19371.284: 98.9411% ( 6) 00:36:45.446 19371.284 - 19476.562: 98.9855% ( 6) 00:36:45.446 19476.562 - 19581.841: 99.0373% ( 7) 00:36:45.446 19581.841 - 19687.120: 99.0521% ( 2) 00:36:45.446 33689.189 - 33899.746: 99.1040% ( 7) 00:36:45.446 33899.746 - 34110.304: 99.1632% ( 8) 00:36:45.446 34110.304 - 34320.861: 99.2150% ( 7) 00:36:45.446 34320.861 - 34531.418: 99.2743% ( 8) 00:36:45.446 34531.418 - 34741.976: 99.3335% ( 8) 00:36:45.446 34741.976 - 34952.533: 99.3854% ( 7) 00:36:45.447 34952.533 - 35163.091: 99.4446% ( 8) 00:36:45.447 35163.091 - 35373.648: 99.5039% ( 8) 00:36:45.447 35373.648 - 35584.206: 99.5261% ( 3) 00:36:45.447 39795.354 - 40005.912: 99.5557% ( 4) 00:36:45.447 40005.912 - 40216.469: 99.6149% ( 8) 00:36:45.447 40216.469 - 40427.027: 99.6668% ( 7) 00:36:45.447 40427.027 - 40637.584: 99.7186% ( 7) 00:36:45.447 40637.584 - 40848.141: 99.7704% ( 7) 00:36:45.447 40848.141 - 41058.699: 99.8223% ( 7) 00:36:45.447 41058.699 - 41269.256: 99.8815% ( 8) 00:36:45.447 41269.256 - 41479.814: 99.9334% ( 7) 00:36:45.447 41479.814 - 41690.371: 99.9852% ( 7) 00:36:45.447 41690.371 - 41900.929: 100.0000% ( 2) 00:36:45.447 00:36:45.447 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:36:45.447 ============================================================================== 00:36:45.447 Range in us Cumulative IO count 00:36:45.447 8106.461 - 8159.100: 0.0516% ( 7) 00:36:45.447 8159.100 - 8211.740: 0.1327% ( 11) 00:36:45.447 8211.740 - 8264.379: 0.3685% ( 32) 00:36:45.447 8264.379 - 8317.018: 0.9213% ( 75) 00:36:45.447 8317.018 - 8369.658: 1.9752% ( 143) 00:36:45.447 8369.658 - 8422.297: 3.6262% ( 224) 00:36:45.447 8422.297 - 8474.937: 6.7143% ( 419) 00:36:45.447 8474.937 - 8527.576: 9.9941% ( 445) 00:36:45.447 8527.576 - 8580.215: 14.0994% ( 557) 00:36:45.447 8580.215 - 8632.855: 18.4552% ( 591) 00:36:45.447 8632.855 - 8685.494: 23.0616% ( 625) 00:36:45.447 8685.494 - 8738.133: 28.0292% ( 674) 00:36:45.447 8738.133 - 8790.773: 33.1884% ( 700) 00:36:45.447 8790.773 - 8843.412: 38.4139% ( 709) 00:36:45.447 8843.412 - 8896.051: 43.7942% ( 730) 00:36:45.447 8896.051 - 8948.691: 49.1893% ( 732) 00:36:45.447 8948.691 - 9001.330: 54.5032% ( 721) 00:36:45.447 9001.330 - 9053.969: 59.7877% ( 717) 00:36:45.447 9053.969 - 9106.609: 65.1238% ( 724) 00:36:45.447 9106.609 - 9159.248: 70.1135% ( 677) 00:36:45.447 9159.248 - 9211.888: 74.8673% ( 645) 00:36:45.447 9211.888 - 9264.527: 78.7662% ( 529) 00:36:45.447 9264.527 - 9317.166: 81.6627% ( 393) 00:36:45.447 9317.166 - 9369.806: 84.0139% ( 319) 00:36:45.447 9369.806 - 9422.445: 85.8491% ( 249) 00:36:45.447 9422.445 - 9475.084: 87.3600% ( 205) 00:36:45.447 9475.084 - 9527.724: 88.5024% ( 155) 00:36:45.447 9527.724 - 9580.363: 89.2762% ( 105) 00:36:45.447 9580.363 - 9633.002: 89.8880% ( 83) 00:36:45.447 9633.002 - 9685.642: 90.4113% ( 71) 00:36:45.447 9685.642 - 9738.281: 90.8682% ( 62) 00:36:45.447 9738.281 - 9790.920: 91.2588% ( 53) 00:36:45.447 9790.920 - 9843.560: 91.6347% ( 51) 00:36:45.447 9843.560 - 9896.199: 91.9664% ( 45) 00:36:45.447 9896.199 - 9948.839: 92.2538% ( 39) 00:36:45.447 9948.839 - 10001.478: 92.5192% ( 36) 00:36:45.447 10001.478 - 10054.117: 92.6887% ( 23) 00:36:45.447 10054.117 - 10106.757: 92.8729% ( 25) 00:36:45.447 10106.757 - 10159.396: 93.0498% ( 24) 00:36:45.447 10159.396 - 10212.035: 93.1972% ( 20) 00:36:45.447 10212.035 - 10264.675: 93.3373% ( 19) 
00:36:45.447 10264.675 - 10317.314: 93.4552% ( 16) 00:36:45.447 10317.314 - 10369.953: 93.6026% ( 20) 00:36:45.447 10369.953 - 10422.593: 93.7500% ( 20) 00:36:45.447 10422.593 - 10475.232: 93.8679% ( 16) 00:36:45.447 10475.232 - 10527.871: 93.9858% ( 16) 00:36:45.447 10527.871 - 10580.511: 94.0890% ( 14) 00:36:45.447 10580.511 - 10633.150: 94.1701% ( 11) 00:36:45.447 10633.150 - 10685.790: 94.2438% ( 10) 00:36:45.447 10685.790 - 10738.429: 94.3101% ( 9) 00:36:45.447 10738.429 - 10791.068: 94.3765% ( 9) 00:36:45.447 10791.068 - 10843.708: 94.4428% ( 9) 00:36:45.447 10843.708 - 10896.347: 94.5239% ( 11) 00:36:45.447 10896.347 - 10948.986: 94.5755% ( 7) 00:36:45.447 10948.986 - 11001.626: 94.6123% ( 5) 00:36:45.447 11001.626 - 11054.265: 94.6639% ( 7) 00:36:45.447 11054.265 - 11106.904: 94.7155% ( 7) 00:36:45.447 11106.904 - 11159.544: 94.7597% ( 6) 00:36:45.447 11159.544 - 11212.183: 94.8040% ( 6) 00:36:45.447 11212.183 - 11264.822: 94.8555% ( 7) 00:36:45.447 11264.822 - 11317.462: 94.9071% ( 7) 00:36:45.447 11317.462 - 11370.101: 94.9514% ( 6) 00:36:45.447 11370.101 - 11422.741: 95.0029% ( 7) 00:36:45.447 11422.741 - 11475.380: 95.0545% ( 7) 00:36:45.447 11475.380 - 11528.019: 95.1356% ( 11) 00:36:45.447 11528.019 - 11580.659: 95.1872% ( 7) 00:36:45.447 11580.659 - 11633.298: 95.2019% ( 2) 00:36:45.447 11633.298 - 11685.937: 95.2241% ( 3) 00:36:45.447 11685.937 - 11738.577: 95.2388% ( 2) 00:36:45.447 11738.577 - 11791.216: 95.2609% ( 3) 00:36:45.447 11791.216 - 11843.855: 95.2978% ( 5) 00:36:45.447 11843.855 - 11896.495: 95.3346% ( 5) 00:36:45.447 11896.495 - 11949.134: 95.4157% ( 11) 00:36:45.447 11949.134 - 12001.773: 95.4968% ( 11) 00:36:45.447 12001.773 - 12054.413: 95.5557% ( 8) 00:36:45.447 12054.413 - 12107.052: 95.6368% ( 11) 00:36:45.447 12107.052 - 12159.692: 95.6884% ( 7) 00:36:45.447 12159.692 - 12212.331: 95.7473% ( 8) 00:36:45.447 12212.331 - 12264.970: 95.7989% ( 7) 00:36:45.447 12264.970 - 12317.610: 95.8505% ( 7) 00:36:45.447 12317.610 - 12370.249: 95.8874% ( 5) 00:36:45.447 12370.249 - 12422.888: 95.9316% ( 6) 00:36:45.447 12422.888 - 12475.528: 95.9832% ( 7) 00:36:45.447 12475.528 - 12528.167: 96.0200% ( 5) 00:36:45.447 12528.167 - 12580.806: 96.0790% ( 8) 00:36:45.447 12580.806 - 12633.446: 96.1159% ( 5) 00:36:45.447 12633.446 - 12686.085: 96.1453% ( 4) 00:36:45.447 12686.085 - 12738.724: 96.1969% ( 7) 00:36:45.447 12738.724 - 12791.364: 96.2485% ( 7) 00:36:45.447 12791.364 - 12844.003: 96.3001% ( 7) 00:36:45.447 12844.003 - 12896.643: 96.3517% ( 7) 00:36:45.447 12896.643 - 12949.282: 96.3959% ( 6) 00:36:45.447 12949.282 - 13001.921: 96.4475% ( 7) 00:36:45.447 13001.921 - 13054.561: 96.4917% ( 6) 00:36:45.447 13054.561 - 13107.200: 96.5507% ( 8) 00:36:45.447 13107.200 - 13159.839: 96.6023% ( 7) 00:36:45.447 13159.839 - 13212.479: 96.6318% ( 4) 00:36:45.447 13212.479 - 13265.118: 96.6613% ( 4) 00:36:45.447 13265.118 - 13317.757: 96.6834% ( 3) 00:36:45.447 13317.757 - 13370.397: 96.6907% ( 1) 00:36:45.447 13370.397 - 13423.036: 96.6981% ( 1) 00:36:45.447 13896.790 - 14002.069: 96.7644% ( 9) 00:36:45.447 14002.069 - 14107.348: 96.7866% ( 3) 00:36:45.447 14107.348 - 14212.627: 96.8529% ( 9) 00:36:45.447 14212.627 - 14317.905: 96.8897% ( 5) 00:36:45.447 14317.905 - 14423.184: 96.9487% ( 8) 00:36:45.447 14423.184 - 14528.463: 97.0593% ( 15) 00:36:45.447 14528.463 - 14633.741: 97.1624% ( 14) 00:36:45.447 14633.741 - 14739.020: 97.2730% ( 15) 00:36:45.447 14739.020 - 14844.299: 97.3762% ( 14) 00:36:45.447 14844.299 - 14949.578: 97.4867% ( 15) 00:36:45.447 14949.578 - 15054.856: 
97.5899% ( 14) 00:36:45.447 15054.856 - 15160.135: 97.6931% ( 14) 00:36:45.447 15160.135 - 15265.414: 97.8258% ( 18) 00:36:45.447 15265.414 - 15370.692: 97.9879% ( 22) 00:36:45.447 15370.692 - 15475.971: 98.0985% ( 15) 00:36:45.447 15475.971 - 15581.250: 98.1869% ( 12) 00:36:45.447 15581.250 - 15686.529: 98.2901% ( 14) 00:36:45.447 15686.529 - 15791.807: 98.3564% ( 9) 00:36:45.447 15791.807 - 15897.086: 98.3933% ( 5) 00:36:45.447 15897.086 - 16002.365: 98.4301% ( 5) 00:36:45.447 16002.365 - 16107.643: 98.4744% ( 6) 00:36:45.447 16107.643 - 16212.922: 98.5112% ( 5) 00:36:45.447 16212.922 - 16318.201: 98.5554% ( 6) 00:36:45.447 16318.201 - 16423.480: 98.5849% ( 4) 00:36:45.447 18634.333 - 18739.611: 98.6291% ( 6) 00:36:45.447 18739.611 - 18844.890: 98.6733% ( 6) 00:36:45.447 18844.890 - 18950.169: 98.7102% ( 5) 00:36:45.447 18950.169 - 19055.447: 98.7618% ( 7) 00:36:45.447 19055.447 - 19160.726: 98.7986% ( 5) 00:36:45.447 19160.726 - 19266.005: 98.8355% ( 5) 00:36:45.447 19266.005 - 19371.284: 98.8871% ( 7) 00:36:45.447 19371.284 - 19476.562: 98.9313% ( 6) 00:36:45.447 19476.562 - 19581.841: 98.9755% ( 6) 00:36:45.447 19581.841 - 19687.120: 99.0271% ( 7) 00:36:45.447 19687.120 - 19792.398: 99.0566% ( 4) 00:36:45.447 26424.957 - 26530.236: 99.0861% ( 4) 00:36:45.447 26530.236 - 26635.515: 99.1156% ( 4) 00:36:45.447 26635.515 - 26740.794: 99.1450% ( 4) 00:36:45.447 26740.794 - 26846.072: 99.1745% ( 4) 00:36:45.447 26846.072 - 26951.351: 99.2040% ( 4) 00:36:45.447 26951.351 - 27161.908: 99.2556% ( 7) 00:36:45.447 27161.908 - 27372.466: 99.3072% ( 7) 00:36:45.447 27372.466 - 27583.023: 99.3662% ( 8) 00:36:45.447 27583.023 - 27793.581: 99.4251% ( 8) 00:36:45.447 27793.581 - 28004.138: 99.4841% ( 8) 00:36:45.447 28004.138 - 28214.696: 99.5283% ( 6) 00:36:45.447 32846.959 - 33057.516: 99.5357% ( 1) 00:36:45.447 33057.516 - 33268.074: 99.5873% ( 7) 00:36:45.447 33268.074 - 33478.631: 99.6389% ( 7) 00:36:45.447 33478.631 - 33689.189: 99.6978% ( 8) 00:36:45.447 33689.189 - 33899.746: 99.7568% ( 8) 00:36:45.447 33899.746 - 34110.304: 99.8084% ( 7) 00:36:45.447 34110.304 - 34320.861: 99.8673% ( 8) 00:36:45.447 34320.861 - 34531.418: 99.9189% ( 7) 00:36:45.447 34531.418 - 34741.976: 99.9853% ( 9) 00:36:45.447 34741.976 - 34952.533: 100.0000% ( 2) 00:36:45.447 00:36:45.447 17:33:45 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:36:46.885 Initializing NVMe Controllers 00:36:46.885 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:36:46.885 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:36:46.885 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:36:46.885 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:36:46.885 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:36:46.885 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:36:46.885 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:36:46.885 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:36:46.885 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:36:46.885 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:36:46.885 Initialization complete. Launching workers. 
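For reading the results that follow: per spdk_nvme_perf's usage text, the invocation above uses -q 128 (queue depth), -w write (sequential write workload), -o 12288 (I/O size in bytes, i.e. 12 KiB), -t 1 (run time in seconds), -LL (software latency tracking with detailed histograms — the per-bucket tables in this log), and -i 0 (shared memory group ID). Below is a minimal sanity-check sketch — plain Python, not an SPDK API; the column and bucket meanings are read off this log, and the bucket excerpt is transcribed from the PCIE (0000:00:10.0) NSID 1 histogram further down:

```python
# Minimal sketch for sanity-checking the spdk_nvme_perf output in this log.
# Not an SPDK API; field meanings are inferred from the log text itself.

IO_SIZE = 12288  # bytes per I/O, from the -o 12288 argument above

# "Device Information" row for PCIE (0000:00:10.0) NSID 1: IOPS -> MiB/s
iops = 13280.11
mib_s = iops * IO_SIZE / (1024 * 1024)
print(f"{mib_s:.2f} MiB/s")  # ~155.63, matching the MiB/s column

# The -LL histograms are cumulative; each bucket line reads
#   <range_start> - <range_end>: <cumulative %> ( <IOs in this bucket> )
# so the p-th percentile is the upper bound of the first bucket whose
# cumulative percentage reaches p.
buckets = [  # (range_end_us, cumulative_percent) — excerpt from the log
    (8843.412, 50.3080),
    (9580.363, 76.3897),
    (11685.937, 90.1292),
]

def percentile(buckets, pct):
    """Return the first bucket upper bound covering `pct` percent of IOs."""
    for end_us, cum in buckets:
        if cum >= pct:
            return end_us
    return None

print(percentile(buckets, 75.0))  # -> 9580.363 us, matching "75.00000% : 9580.363us"
```

The per-device "Summary latency data" percentile blocks below can be cross-checked the same way against the matching cumulative histograms.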
00:36:46.885 ========================================================
00:36:46.885 Latency(us)
00:36:46.885 Device Information : IOPS MiB/s Average min max
00:36:46.885 PCIE (0000:00:10.0) NSID 1 from core 0: 13280.11 155.63 9665.00 4937.79 42491.17
00:36:46.885 PCIE (0000:00:11.0) NSID 1 from core 0: 13280.11 155.63 9650.66 7222.84 40647.96
00:36:46.885 PCIE (0000:00:13.0) NSID 1 from core 0: 13280.11 155.63 9636.48 6952.27 39642.17
00:36:46.885 PCIE (0000:00:12.0) NSID 1 from core 0: 13280.11 155.63 9622.28 7368.65 37654.85
00:36:46.885 PCIE (0000:00:12.0) NSID 2 from core 0: 13280.11 155.63 9608.14 4168.52 35795.07
00:36:46.885 PCIE (0000:00:12.0) NSID 3 from core 0: 13343.96 156.37 9548.04 7196.26 28019.69
00:36:46.885 ========================================================
00:36:46.885 Total : 79744.53 934.51 9621.71 4168.52 42491.17
00:36:46.885
00:36:46.885 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:36:46.885 =================================================================================
00:36:46.885 1.00000% : 7474.789us
00:36:46.885 10.00000% : 8053.822us
00:36:46.885 25.00000% : 8369.658us
00:36:46.885 50.00000% : 8843.412us
00:36:46.885 75.00000% : 9580.363us
00:36:46.885 90.00000% : 11685.937us
00:36:46.885 95.00000% : 13896.790us
00:36:46.885 98.00000% : 19055.447us
00:36:46.885 99.00000% : 25372.170us
00:36:46.885 99.50000% : 34320.861us
00:36:46.885 99.90000% : 42111.486us
00:36:46.885 99.99000% : 42532.601us
00:36:46.885 99.99900% : 42532.601us
00:36:46.885 99.99990% : 42532.601us
00:36:46.885 99.99999% : 42532.601us
00:36:46.885
00:36:46.885 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:36:46.885 =================================================================================
00:36:46.885 1.00000% : 7632.707us
00:36:46.885 10.00000% : 8053.822us
00:36:46.885 25.00000% : 8422.297us
00:36:46.885 50.00000% : 8843.412us
00:36:46.885 75.00000% : 9527.724us
00:36:46.885 90.00000% : 11738.577us
00:36:46.885 95.00000% : 14317.905us
00:36:46.885 98.00000% : 18844.890us
00:36:46.885 99.00000% : 26109.121us
00:36:46.885 99.50000% : 32425.844us
00:36:46.885 99.90000% : 40427.027us
00:36:46.885 99.99000% : 40637.584us
00:36:46.885 99.99900% : 40848.141us
00:36:46.885 99.99990% : 40848.141us
00:36:46.885 99.99999% : 40848.141us
00:36:46.885
00:36:46.885 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:36:46.885 =================================================================================
00:36:46.885 1.00000% : 7527.428us
00:36:46.885 10.00000% : 8106.461us
00:36:46.885 25.00000% : 8422.297us
00:36:46.885 50.00000% : 8896.051us
00:36:46.885 75.00000% : 9527.724us
00:36:46.885 90.00000% : 11528.019us
00:36:46.885 95.00000% : 13580.954us
00:36:46.886 98.00000% : 19055.447us
00:36:46.886 99.00000% : 27372.466us
00:36:46.886 99.50000% : 31583.614us
00:36:46.886 99.90000% : 39374.239us
00:36:46.886 99.99000% : 39795.354us
00:36:46.886 99.99900% : 39795.354us
00:36:46.886 99.99990% : 39795.354us
00:36:46.886 99.99999% : 39795.354us
00:36:46.886
00:36:46.886 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:36:46.886 =================================================================================
00:36:46.886 1.00000% : 7685.346us
00:36:46.886 10.00000% : 8053.822us
00:36:46.886 25.00000% : 8422.297us
00:36:46.886 50.00000% : 8843.412us
00:36:46.886 75.00000% : 9527.724us
00:36:46.886 90.00000% : 11422.741us
00:36:46.886 95.00000% : 13265.118us
00:36:46.886 98.00000% : 18423.775us
00:36:46.886
99.00000% : 26319.679us 00:36:46.886 99.50000% : 30109.712us 00:36:46.886 99.90000% : 37479.222us 00:36:46.886 99.99000% : 37689.780us 00:36:46.886 99.99900% : 37689.780us 00:36:46.886 99.99990% : 37689.780us 00:36:46.886 99.99999% : 37689.780us 00:36:46.886 00:36:46.886 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:36:46.886 ================================================================================= 00:36:46.886 1.00000% : 7580.067us 00:36:46.886 10.00000% : 8106.461us 00:36:46.886 25.00000% : 8369.658us 00:36:46.886 50.00000% : 8843.412us 00:36:46.886 75.00000% : 9475.084us 00:36:46.886 90.00000% : 11633.298us 00:36:46.886 95.00000% : 13370.397us 00:36:46.886 98.00000% : 17686.824us 00:36:46.886 99.00000% : 25898.564us 00:36:46.886 99.50000% : 28425.253us 00:36:46.886 99.90000% : 35584.206us 00:36:46.886 99.99000% : 35794.763us 00:36:46.886 99.99900% : 36005.320us 00:36:46.886 99.99990% : 36005.320us 00:36:46.886 99.99999% : 36005.320us 00:36:46.886 00:36:46.886 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:36:46.886 ================================================================================= 00:36:46.886 1.00000% : 7580.067us 00:36:46.886 10.00000% : 8106.461us 00:36:46.886 25.00000% : 8422.297us 00:36:46.886 50.00000% : 8843.412us 00:36:46.886 75.00000% : 9475.084us 00:36:46.886 90.00000% : 11580.659us 00:36:46.886 95.00000% : 13686.233us 00:36:46.886 98.00000% : 17897.382us 00:36:46.886 99.00000% : 24319.383us 00:36:46.886 99.50000% : 25266.892us 00:36:46.886 99.90000% : 27793.581us 00:36:46.886 99.99000% : 28004.138us 00:36:46.886 99.99900% : 28214.696us 00:36:46.886 99.99990% : 28214.696us 00:36:46.886 99.99999% : 28214.696us 00:36:46.886 00:36:46.886 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:36:46.886 ============================================================================== 00:36:46.886 Range in us Cumulative IO count 00:36:46.886 4921.780 - 4948.100: 0.0150% ( 2) 00:36:46.886 7053.674 - 7106.313: 0.0225% ( 1) 00:36:46.886 7106.313 - 7158.953: 0.1202% ( 13) 00:36:46.886 7158.953 - 7211.592: 0.1878% ( 9) 00:36:46.886 7211.592 - 7264.231: 0.2704% ( 11) 00:36:46.886 7264.231 - 7316.871: 0.3531% ( 11) 00:36:46.886 7316.871 - 7369.510: 0.5784% ( 30) 00:36:46.886 7369.510 - 7422.149: 0.7963% ( 29) 00:36:46.886 7422.149 - 7474.789: 1.0066% ( 28) 00:36:46.886 7474.789 - 7527.428: 1.4047% ( 53) 00:36:46.886 7527.428 - 7580.067: 2.0433% ( 85) 00:36:46.886 7580.067 - 7632.707: 2.5391% ( 66) 00:36:46.886 7632.707 - 7685.346: 3.2151% ( 90) 00:36:46.886 7685.346 - 7737.986: 4.0715% ( 114) 00:36:46.886 7737.986 - 7790.625: 4.8903% ( 109) 00:36:46.886 7790.625 - 7843.264: 5.8744% ( 131) 00:36:46.886 7843.264 - 7895.904: 7.1890% ( 175) 00:36:46.886 7895.904 - 7948.543: 8.3459% ( 154) 00:36:46.886 7948.543 - 8001.182: 9.8257% ( 197) 00:36:46.886 8001.182 - 8053.822: 11.5309% ( 227) 00:36:46.886 8053.822 - 8106.461: 13.5892% ( 274) 00:36:46.886 8106.461 - 8159.100: 15.3771% ( 238) 00:36:46.886 8159.100 - 8211.740: 17.5481% ( 289) 00:36:46.886 8211.740 - 8264.379: 20.2374% ( 358) 00:36:46.886 8264.379 - 8317.018: 22.6788% ( 325) 00:36:46.886 8317.018 - 8369.658: 25.7737% ( 412) 00:36:46.886 8369.658 - 8422.297: 28.7260% ( 393) 00:36:46.886 8422.297 - 8474.937: 31.5956% ( 382) 00:36:46.886 8474.937 - 8527.576: 34.7506% ( 420) 00:36:46.886 8527.576 - 8580.215: 38.0409% ( 438) 00:36:46.886 8580.215 - 8632.855: 40.5499% ( 334) 00:36:46.886 8632.855 - 8685.494: 43.1115% ( 341) 00:36:46.886 8685.494 - 8738.133: 45.5829% ( 
329) 00:36:46.886 8738.133 - 8790.773: 47.9267% ( 312) 00:36:46.886 8790.773 - 8843.412: 50.3080% ( 317) 00:36:46.886 8843.412 - 8896.051: 52.1785% ( 249) 00:36:46.886 8896.051 - 8948.691: 54.5673% ( 318) 00:36:46.886 8948.691 - 9001.330: 56.7157% ( 286) 00:36:46.886 9001.330 - 9053.969: 58.9994% ( 304) 00:36:46.886 9053.969 - 9106.609: 61.1929% ( 292) 00:36:46.886 9106.609 - 9159.248: 63.2963% ( 280) 00:36:46.886 9159.248 - 9211.888: 65.4973% ( 293) 00:36:46.886 9211.888 - 9264.527: 67.2776% ( 237) 00:36:46.886 9264.527 - 9317.166: 69.0505% ( 236) 00:36:46.886 9317.166 - 9369.806: 70.7407% ( 225) 00:36:46.886 9369.806 - 9422.445: 72.4459% ( 227) 00:36:46.886 9422.445 - 9475.084: 73.8206% ( 183) 00:36:46.886 9475.084 - 9527.724: 74.9324% ( 148) 00:36:46.886 9527.724 - 9580.363: 76.3897% ( 194) 00:36:46.886 9580.363 - 9633.002: 77.5466% ( 154) 00:36:46.886 9633.002 - 9685.642: 78.4330% ( 118) 00:36:46.886 9685.642 - 9738.281: 79.4171% ( 131) 00:36:46.886 9738.281 - 9790.920: 80.5364% ( 149) 00:36:46.886 9790.920 - 9843.560: 81.4904% ( 127) 00:36:46.886 9843.560 - 9896.199: 82.3468% ( 114) 00:36:46.886 9896.199 - 9948.839: 82.9102% ( 75) 00:36:46.886 9948.839 - 10001.478: 83.4210% ( 68) 00:36:46.886 10001.478 - 10054.117: 83.7816% ( 48) 00:36:46.886 10054.117 - 10106.757: 84.0595% ( 37) 00:36:46.886 10106.757 - 10159.396: 84.4351% ( 50) 00:36:46.886 10159.396 - 10212.035: 84.7656% ( 44) 00:36:46.886 10212.035 - 10264.675: 84.9609% ( 26) 00:36:46.886 10264.675 - 10317.314: 85.1412% ( 24) 00:36:46.886 10317.314 - 10369.953: 85.2689% ( 17) 00:36:46.886 10369.953 - 10422.593: 85.3140% ( 6) 00:36:46.886 10422.593 - 10475.232: 85.3591% ( 6) 00:36:46.886 10475.232 - 10527.871: 85.3891% ( 4) 00:36:46.886 10527.871 - 10580.511: 85.4793% ( 12) 00:36:46.886 10580.511 - 10633.150: 85.7197% ( 32) 00:36:46.886 10633.150 - 10685.790: 86.0126% ( 39) 00:36:46.886 10685.790 - 10738.429: 86.2755% ( 35) 00:36:46.886 10738.429 - 10791.068: 86.3582% ( 11) 00:36:46.886 10791.068 - 10843.708: 86.4408% ( 11) 00:36:46.886 10843.708 - 10896.347: 86.5760% ( 18) 00:36:46.886 10896.347 - 10948.986: 86.7713% ( 26) 00:36:46.886 10948.986 - 11001.626: 86.9817% ( 28) 00:36:46.886 11001.626 - 11054.265: 87.2596% ( 37) 00:36:46.886 11054.265 - 11106.904: 87.7254% ( 62) 00:36:46.886 11106.904 - 11159.544: 87.9582% ( 31) 00:36:46.886 11159.544 - 11212.183: 88.1686% ( 28) 00:36:46.886 11212.183 - 11264.822: 88.4540% ( 38) 00:36:46.886 11264.822 - 11317.462: 88.5968% ( 19) 00:36:46.886 11317.462 - 11370.101: 88.7395% ( 19) 00:36:46.886 11370.101 - 11422.741: 88.9573% ( 29) 00:36:46.886 11422.741 - 11475.380: 89.3029% ( 46) 00:36:46.886 11475.380 - 11528.019: 89.6409% ( 45) 00:36:46.886 11528.019 - 11580.659: 89.7987% ( 21) 00:36:46.886 11580.659 - 11633.298: 89.9715% ( 23) 00:36:46.886 11633.298 - 11685.937: 90.1292% ( 21) 00:36:46.886 11685.937 - 11738.577: 90.3095% ( 24) 00:36:46.886 11738.577 - 11791.216: 90.5799% ( 36) 00:36:46.886 11791.216 - 11843.855: 90.8053% ( 30) 00:36:46.886 11843.855 - 11896.495: 90.9856% ( 24) 00:36:46.886 11896.495 - 11949.134: 91.1734% ( 25) 00:36:46.886 11949.134 - 12001.773: 91.2335% ( 8) 00:36:46.886 12001.773 - 12054.413: 91.3762% ( 19) 00:36:46.886 12054.413 - 12107.052: 91.4889% ( 15) 00:36:46.886 12107.052 - 12159.692: 91.7142% ( 30) 00:36:46.886 12159.692 - 12212.331: 92.0072% ( 39) 00:36:46.886 12212.331 - 12264.970: 92.1875% ( 24) 00:36:46.886 12264.970 - 12317.610: 92.3077% ( 16) 00:36:46.886 12317.610 - 12370.249: 92.4053% ( 13) 00:36:46.886 12370.249 - 12422.888: 92.5180% ( 15) 
00:36:46.886 12422.888 - 12475.528: 92.6457% ( 17) 00:36:46.886 12475.528 - 12528.167: 92.7809% ( 18) 00:36:46.886 12528.167 - 12580.806: 92.9087% ( 17) 00:36:46.886 12580.806 - 12633.446: 93.0439% ( 18) 00:36:46.886 12633.446 - 12686.085: 93.1791% ( 18) 00:36:46.886 12686.085 - 12738.724: 93.2542% ( 10) 00:36:46.886 12738.724 - 12791.364: 93.3444% ( 12) 00:36:46.886 12791.364 - 12844.003: 93.3969% ( 7) 00:36:46.886 12844.003 - 12896.643: 93.4570% ( 8) 00:36:46.886 12896.643 - 12949.282: 93.5397% ( 11) 00:36:46.886 12949.282 - 13001.921: 93.6073% ( 9) 00:36:46.886 13001.921 - 13054.561: 93.6523% ( 6) 00:36:46.886 13054.561 - 13107.200: 93.7425% ( 12) 00:36:46.886 13107.200 - 13159.839: 93.8101% ( 9) 00:36:46.886 13159.839 - 13212.479: 93.8552% ( 6) 00:36:46.886 13212.479 - 13265.118: 93.9378% ( 11) 00:36:46.886 13265.118 - 13317.757: 93.9979% ( 8) 00:36:46.886 13317.757 - 13370.397: 94.0730% ( 10) 00:36:46.886 13370.397 - 13423.036: 94.1181% ( 6) 00:36:46.886 13423.036 - 13475.676: 94.1707% ( 7) 00:36:46.886 13475.676 - 13580.954: 94.4035% ( 31) 00:36:46.886 13580.954 - 13686.233: 94.6289% ( 30) 00:36:46.886 13686.233 - 13791.512: 94.8618% ( 31) 00:36:46.886 13791.512 - 13896.790: 95.0346% ( 23) 00:36:46.886 13896.790 - 14002.069: 95.2900% ( 34) 00:36:46.886 14002.069 - 14107.348: 95.4026% ( 15) 00:36:46.886 14107.348 - 14212.627: 95.4552% ( 7) 00:36:46.886 14212.627 - 14317.905: 95.5679% ( 15) 00:36:46.886 14317.905 - 14423.184: 95.6505% ( 11) 00:36:46.886 14423.184 - 14528.463: 95.7632% ( 15) 00:36:46.886 14528.463 - 14633.741: 95.8158% ( 7) 00:36:46.886 14633.741 - 14739.020: 95.8308% ( 2) 00:36:46.887 14739.020 - 14844.299: 95.8684% ( 5) 00:36:46.887 14844.299 - 14949.578: 95.9135% ( 6) 00:36:46.887 14949.578 - 15054.856: 95.9886% ( 10) 00:36:46.887 15054.856 - 15160.135: 96.1163% ( 17) 00:36:46.887 15160.135 - 15265.414: 96.2740% ( 21) 00:36:46.887 15265.414 - 15370.692: 96.4168% ( 19) 00:36:46.887 15370.692 - 15475.971: 96.5445% ( 17) 00:36:46.887 15475.971 - 15581.250: 96.6121% ( 9) 00:36:46.887 15581.250 - 15686.529: 96.6947% ( 11) 00:36:46.887 15686.529 - 15791.807: 96.7623% ( 9) 00:36:46.887 15791.807 - 15897.086: 96.8450% ( 11) 00:36:46.887 15897.086 - 16002.365: 96.8900% ( 6) 00:36:46.887 16002.365 - 16107.643: 96.9351% ( 6) 00:36:46.887 16107.643 - 16212.922: 96.9877% ( 7) 00:36:46.887 16212.922 - 16318.201: 97.0102% ( 3) 00:36:46.887 16318.201 - 16423.480: 97.0252% ( 2) 00:36:46.887 16423.480 - 16528.758: 97.0553% ( 4) 00:36:46.887 16528.758 - 16634.037: 97.0703% ( 2) 00:36:46.887 16634.037 - 16739.316: 97.0778% ( 1) 00:36:46.887 16739.316 - 16844.594: 97.1004% ( 3) 00:36:46.887 16844.594 - 16949.873: 97.1154% ( 2) 00:36:46.887 17370.988 - 17476.267: 97.1229% ( 1) 00:36:46.887 17686.824 - 17792.103: 97.2431% ( 16) 00:36:46.887 17792.103 - 17897.382: 97.4084% ( 22) 00:36:46.887 17897.382 - 18002.660: 97.4684% ( 8) 00:36:46.887 18002.660 - 18107.939: 97.5886% ( 16) 00:36:46.887 18107.939 - 18213.218: 97.6487% ( 8) 00:36:46.887 18213.218 - 18318.496: 97.7389% ( 12) 00:36:46.887 18318.496 - 18423.775: 97.7840% ( 6) 00:36:46.887 18423.775 - 18529.054: 97.8140% ( 4) 00:36:46.887 18529.054 - 18634.333: 97.8441% ( 4) 00:36:46.887 18634.333 - 18739.611: 97.8666% ( 3) 00:36:46.887 18739.611 - 18844.890: 97.9117% ( 6) 00:36:46.887 18844.890 - 18950.169: 97.9417% ( 4) 00:36:46.887 18950.169 - 19055.447: 98.0619% ( 16) 00:36:46.887 19055.447 - 19160.726: 98.1220% ( 8) 00:36:46.887 19160.726 - 19266.005: 98.1746% ( 7) 00:36:46.887 19266.005 - 19371.284: 98.2572% ( 11) 00:36:46.887 
19371.284 - 19476.562: 98.3323% ( 10) 00:36:46.887 19476.562 - 19581.841: 98.4375% ( 14) 00:36:46.887 19581.841 - 19687.120: 98.5201% ( 11) 00:36:46.887 19687.120 - 19792.398: 98.5577% ( 5) 00:36:46.887 24214.104 - 24319.383: 98.5652% ( 1) 00:36:46.887 24319.383 - 24424.662: 98.5727% ( 1) 00:36:46.887 24529.941 - 24635.219: 98.6403% ( 9) 00:36:46.887 24635.219 - 24740.498: 98.7079% ( 9) 00:36:46.887 24740.498 - 24845.777: 98.7530% ( 6) 00:36:46.887 24845.777 - 24951.055: 98.7831% ( 4) 00:36:46.887 24951.055 - 25056.334: 98.8356% ( 7) 00:36:46.887 25056.334 - 25161.613: 98.9108% ( 10) 00:36:46.887 25161.613 - 25266.892: 98.9784% ( 9) 00:36:46.887 25266.892 - 25372.170: 99.0084% ( 4) 00:36:46.887 25372.170 - 25477.449: 99.0385% ( 4) 00:36:46.887 32425.844 - 32636.402: 99.0610% ( 3) 00:36:46.887 32636.402 - 32846.959: 99.1436% ( 11) 00:36:46.887 32846.959 - 33057.516: 99.2112% ( 9) 00:36:46.887 33057.516 - 33268.074: 99.2788% ( 9) 00:36:46.887 33268.074 - 33478.631: 99.3239% ( 6) 00:36:46.887 33478.631 - 33689.189: 99.3690% ( 6) 00:36:46.887 33689.189 - 33899.746: 99.4066% ( 5) 00:36:46.887 33899.746 - 34110.304: 99.4591% ( 7) 00:36:46.887 34110.304 - 34320.861: 99.5192% ( 8) 00:36:46.887 40427.027 - 40637.584: 99.5343% ( 2) 00:36:46.887 40637.584 - 40848.141: 99.5944% ( 8) 00:36:46.887 40848.141 - 41058.699: 99.6469% ( 7) 00:36:46.887 41058.699 - 41269.256: 99.6995% ( 7) 00:36:46.887 41269.256 - 41479.814: 99.7521% ( 7) 00:36:46.887 41479.814 - 41690.371: 99.8047% ( 7) 00:36:46.887 41690.371 - 41900.929: 99.8573% ( 7) 00:36:46.887 41900.929 - 42111.486: 99.9099% ( 7) 00:36:46.887 42111.486 - 42322.043: 99.9624% ( 7) 00:36:46.887 42322.043 - 42532.601: 100.0000% ( 5) 00:36:46.887 00:36:46.887 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:36:46.887 ============================================================================== 00:36:46.887 Range in us Cumulative IO count 00:36:46.887 7211.592 - 7264.231: 0.0150% ( 2) 00:36:46.887 7316.871 - 7369.510: 0.0225% ( 1) 00:36:46.887 7369.510 - 7422.149: 0.0901% ( 9) 00:36:46.887 7422.149 - 7474.789: 0.2028% ( 15) 00:36:46.887 7474.789 - 7527.428: 0.4282% ( 30) 00:36:46.887 7527.428 - 7580.067: 0.9014% ( 63) 00:36:46.887 7580.067 - 7632.707: 1.5475% ( 86) 00:36:46.887 7632.707 - 7685.346: 2.6593% ( 148) 00:36:46.887 7685.346 - 7737.986: 3.4255% ( 102) 00:36:46.887 7737.986 - 7790.625: 4.3344% ( 121) 00:36:46.887 7790.625 - 7843.264: 5.8218% ( 198) 00:36:46.887 7843.264 - 7895.904: 6.8510% ( 137) 00:36:46.887 7895.904 - 7948.543: 7.6547% ( 107) 00:36:46.887 7948.543 - 8001.182: 8.7891% ( 151) 00:36:46.887 8001.182 - 8053.822: 10.4117% ( 216) 00:36:46.887 8053.822 - 8106.461: 12.1920% ( 237) 00:36:46.887 8106.461 - 8159.100: 14.1376% ( 259) 00:36:46.887 8159.100 - 8211.740: 16.7293% ( 345) 00:36:46.887 8211.740 - 8264.379: 19.9669% ( 431) 00:36:46.887 8264.379 - 8317.018: 22.5736% ( 347) 00:36:46.887 8317.018 - 8369.658: 24.8272% ( 300) 00:36:46.887 8369.658 - 8422.297: 28.4630% ( 484) 00:36:46.887 8422.297 - 8474.937: 31.9712% ( 467) 00:36:46.887 8474.937 - 8527.576: 34.8182% ( 379) 00:36:46.887 8527.576 - 8580.215: 38.1686% ( 446) 00:36:46.887 8580.215 - 8632.855: 41.0306% ( 381) 00:36:46.887 8632.855 - 8685.494: 43.7876% ( 367) 00:36:46.887 8685.494 - 8738.133: 47.1529% ( 448) 00:36:46.887 8738.133 - 8790.773: 49.7296% ( 343) 00:36:46.887 8790.773 - 8843.412: 51.9531% ( 296) 00:36:46.887 8843.412 - 8896.051: 54.2368% ( 304) 00:36:46.887 8896.051 - 8948.691: 56.2275% ( 265) 00:36:46.887 8948.691 - 9001.330: 58.0754% ( 246) 
00:36:46.887 9001.330 - 9053.969: 59.7731% ( 226) 00:36:46.887 9053.969 - 9106.609: 61.4333% ( 221) 00:36:46.887 9106.609 - 9159.248: 63.4991% ( 275) 00:36:46.887 9159.248 - 9211.888: 65.2644% ( 235) 00:36:46.887 9211.888 - 9264.527: 66.8269% ( 208) 00:36:46.887 9264.527 - 9317.166: 68.3293% ( 200) 00:36:46.887 9317.166 - 9369.806: 69.8618% ( 204) 00:36:46.887 9369.806 - 9422.445: 71.6722% ( 241) 00:36:46.887 9422.445 - 9475.084: 73.3173% ( 219) 00:36:46.887 9475.084 - 9527.724: 75.3080% ( 265) 00:36:46.887 9527.724 - 9580.363: 76.9456% ( 218) 00:36:46.887 9580.363 - 9633.002: 78.1250% ( 157) 00:36:46.887 9633.002 - 9685.642: 79.1842% ( 141) 00:36:46.887 9685.642 - 9738.281: 80.1983% ( 135) 00:36:46.887 9738.281 - 9790.920: 81.1073% ( 121) 00:36:46.887 9790.920 - 9843.560: 81.9787% ( 116) 00:36:46.887 9843.560 - 9896.199: 82.5045% ( 70) 00:36:46.887 9896.199 - 9948.839: 82.9102% ( 54) 00:36:46.887 9948.839 - 10001.478: 83.3083% ( 53) 00:36:46.887 10001.478 - 10054.117: 83.7891% ( 64) 00:36:46.887 10054.117 - 10106.757: 84.0294% ( 32) 00:36:46.887 10106.757 - 10159.396: 84.2323% ( 27) 00:36:46.887 10159.396 - 10212.035: 84.3975% ( 22) 00:36:46.887 10212.035 - 10264.675: 84.5928% ( 26) 00:36:46.887 10264.675 - 10317.314: 84.7130% ( 16) 00:36:46.887 10317.314 - 10369.953: 84.7882% ( 10) 00:36:46.887 10369.953 - 10422.593: 84.9084% ( 16) 00:36:46.887 10422.593 - 10475.232: 85.1187% ( 28) 00:36:46.887 10475.232 - 10527.871: 85.3816% ( 35) 00:36:46.887 10527.871 - 10580.511: 85.7422% ( 48) 00:36:46.887 10580.511 - 10633.150: 86.0201% ( 37) 00:36:46.887 10633.150 - 10685.790: 86.2831% ( 35) 00:36:46.887 10685.790 - 10738.429: 86.5385% ( 34) 00:36:46.887 10738.429 - 10791.068: 86.8615% ( 43) 00:36:46.887 10791.068 - 10843.708: 87.1244% ( 35) 00:36:46.887 10843.708 - 10896.347: 87.3873% ( 35) 00:36:46.887 10896.347 - 10948.986: 87.6953% ( 41) 00:36:46.887 10948.986 - 11001.626: 87.9357% ( 32) 00:36:46.887 11001.626 - 11054.265: 88.1160% ( 24) 00:36:46.887 11054.265 - 11106.904: 88.2212% ( 14) 00:36:46.887 11106.904 - 11159.544: 88.3564% ( 18) 00:36:46.887 11159.544 - 11212.183: 88.4691% ( 15) 00:36:46.887 11212.183 - 11264.822: 88.5968% ( 17) 00:36:46.887 11264.822 - 11317.462: 88.7470% ( 20) 00:36:46.887 11317.462 - 11370.101: 88.9648% ( 29) 00:36:46.887 11370.101 - 11422.741: 89.2428% ( 37) 00:36:46.887 11422.741 - 11475.380: 89.3930% ( 20) 00:36:46.887 11475.380 - 11528.019: 89.5508% ( 21) 00:36:46.887 11528.019 - 11580.659: 89.7010% ( 20) 00:36:46.887 11580.659 - 11633.298: 89.8438% ( 19) 00:36:46.887 11633.298 - 11685.937: 89.9865% ( 19) 00:36:46.887 11685.937 - 11738.577: 90.1292% ( 19) 00:36:46.887 11738.577 - 11791.216: 90.4447% ( 42) 00:36:46.887 11791.216 - 11843.855: 90.7978% ( 47) 00:36:46.887 11843.855 - 11896.495: 90.9630% ( 22) 00:36:46.887 11896.495 - 11949.134: 91.0907% ( 17) 00:36:46.887 11949.134 - 12001.773: 91.1508% ( 8) 00:36:46.887 12001.773 - 12054.413: 91.3311% ( 24) 00:36:46.887 12054.413 - 12107.052: 91.4964% ( 22) 00:36:46.887 12107.052 - 12159.692: 91.7067% ( 28) 00:36:46.887 12159.692 - 12212.331: 91.9396% ( 31) 00:36:46.887 12212.331 - 12264.970: 92.0898% ( 20) 00:36:46.887 12264.970 - 12317.610: 92.2551% ( 22) 00:36:46.887 12317.610 - 12370.249: 92.4129% ( 21) 00:36:46.887 12370.249 - 12422.888: 92.5931% ( 24) 00:36:46.887 12422.888 - 12475.528: 92.6983% ( 14) 00:36:46.887 12475.528 - 12528.167: 92.7809% ( 11) 00:36:46.887 12528.167 - 12580.806: 92.8786% ( 13) 00:36:46.887 12580.806 - 12633.446: 92.9688% ( 12) 00:36:46.887 12633.446 - 12686.085: 93.0664% ( 13) 
00:36:46.887 12686.085 - 12738.724: 93.2091% ( 19) 00:36:46.887 12738.724 - 12791.364: 93.3368% ( 17) 00:36:46.887 12791.364 - 12844.003: 93.4796% ( 19) 00:36:46.887 12844.003 - 12896.643: 93.6373% ( 21) 00:36:46.887 12896.643 - 12949.282: 93.7575% ( 16) 00:36:46.887 12949.282 - 13001.921: 93.9153% ( 21) 00:36:46.887 13001.921 - 13054.561: 93.9979% ( 11) 00:36:46.887 13054.561 - 13107.200: 94.0880% ( 12) 00:36:46.888 13107.200 - 13159.839: 94.1632% ( 10) 00:36:46.888 13159.839 - 13212.479: 94.2608% ( 13) 00:36:46.888 13212.479 - 13265.118: 94.3284% ( 9) 00:36:46.888 13265.118 - 13317.757: 94.4111% ( 11) 00:36:46.888 13317.757 - 13370.397: 94.5237% ( 15) 00:36:46.888 13370.397 - 13423.036: 94.5913% ( 9) 00:36:46.888 13423.036 - 13475.676: 94.6064% ( 2) 00:36:46.888 13475.676 - 13580.954: 94.6289% ( 3) 00:36:46.888 13580.954 - 13686.233: 94.6514% ( 3) 00:36:46.888 13686.233 - 13791.512: 94.6665% ( 2) 00:36:46.888 13791.512 - 13896.790: 94.6965% ( 4) 00:36:46.888 13896.790 - 14002.069: 94.7491% ( 7) 00:36:46.888 14002.069 - 14107.348: 94.8242% ( 10) 00:36:46.888 14107.348 - 14212.627: 94.9444% ( 16) 00:36:46.888 14212.627 - 14317.905: 95.0646% ( 16) 00:36:46.888 14317.905 - 14423.184: 95.2148% ( 20) 00:36:46.888 14423.184 - 14528.463: 95.3726% ( 21) 00:36:46.888 14528.463 - 14633.741: 95.5829% ( 28) 00:36:46.888 14633.741 - 14739.020: 95.7557% ( 23) 00:36:46.888 14739.020 - 14844.299: 96.1839% ( 57) 00:36:46.888 14844.299 - 14949.578: 96.3116% ( 17) 00:36:46.888 14949.578 - 15054.856: 96.4393% ( 17) 00:36:46.888 15054.856 - 15160.135: 96.5670% ( 17) 00:36:46.888 15160.135 - 15265.414: 96.6572% ( 12) 00:36:46.888 15265.414 - 15370.692: 96.8525% ( 26) 00:36:46.888 15370.692 - 15475.971: 97.0252% ( 23) 00:36:46.888 15475.971 - 15581.250: 97.0478% ( 3) 00:36:46.888 15581.250 - 15686.529: 97.0703% ( 3) 00:36:46.888 15686.529 - 15791.807: 97.0853% ( 2) 00:36:46.888 15791.807 - 15897.086: 97.1079% ( 3) 00:36:46.888 15897.086 - 16002.365: 97.1154% ( 1) 00:36:46.888 17581.545 - 17686.824: 97.1379% ( 3) 00:36:46.888 17686.824 - 17792.103: 97.1905% ( 7) 00:36:46.888 17792.103 - 17897.382: 97.2281% ( 5) 00:36:46.888 17897.382 - 18002.660: 97.3783% ( 20) 00:36:46.888 18002.660 - 18107.939: 97.4384% ( 8) 00:36:46.888 18107.939 - 18213.218: 97.5210% ( 11) 00:36:46.888 18213.218 - 18318.496: 97.6037% ( 11) 00:36:46.888 18318.496 - 18423.775: 97.6863% ( 11) 00:36:46.888 18423.775 - 18529.054: 97.7764% ( 12) 00:36:46.888 18529.054 - 18634.333: 97.8816% ( 14) 00:36:46.888 18634.333 - 18739.611: 97.9718% ( 12) 00:36:46.888 18739.611 - 18844.890: 98.0243% ( 7) 00:36:46.888 18844.890 - 18950.169: 98.0619% ( 5) 00:36:46.888 18950.169 - 19055.447: 98.0769% ( 2) 00:36:46.888 19687.120 - 19792.398: 98.0844% ( 1) 00:36:46.888 19897.677 - 20002.956: 98.0995% ( 2) 00:36:46.888 20002.956 - 20108.235: 98.1821% ( 11) 00:36:46.888 20108.235 - 20213.513: 98.3248% ( 19) 00:36:46.888 20213.513 - 20318.792: 98.4675% ( 19) 00:36:46.888 20318.792 - 20424.071: 98.5577% ( 12) 00:36:46.888 24951.055 - 25056.334: 98.5953% ( 5) 00:36:46.888 25056.334 - 25161.613: 98.6328% ( 5) 00:36:46.888 25161.613 - 25266.892: 98.6704% ( 5) 00:36:46.888 25266.892 - 25372.170: 98.7079% ( 5) 00:36:46.888 25372.170 - 25477.449: 98.7530% ( 6) 00:36:46.888 25477.449 - 25582.728: 98.7981% ( 6) 00:36:46.888 25582.728 - 25688.006: 98.8431% ( 6) 00:36:46.888 25688.006 - 25793.285: 98.8807% ( 5) 00:36:46.888 25793.285 - 25898.564: 98.9258% ( 6) 00:36:46.888 25898.564 - 26003.843: 98.9709% ( 6) 00:36:46.888 26003.843 - 26109.121: 99.0159% ( 6) 00:36:46.888 
26109.121 - 26214.400: 99.0385% ( 3) 00:36:46.888 30530.827 - 30741.385: 99.0610% ( 3) 00:36:46.888 30741.385 - 30951.942: 99.1211% ( 8) 00:36:46.888 30951.942 - 31162.500: 99.1812% ( 8) 00:36:46.888 31162.500 - 31373.057: 99.2338% ( 7) 00:36:46.888 31373.057 - 31583.614: 99.2939% ( 8) 00:36:46.888 31583.614 - 31794.172: 99.3465% ( 7) 00:36:46.888 31794.172 - 32004.729: 99.3990% ( 7) 00:36:46.888 32004.729 - 32215.287: 99.4516% ( 7) 00:36:46.888 32215.287 - 32425.844: 99.5117% ( 8) 00:36:46.888 32425.844 - 32636.402: 99.5192% ( 1) 00:36:46.888 38742.567 - 38953.124: 99.5418% ( 3) 00:36:46.888 38953.124 - 39163.682: 99.6019% ( 8) 00:36:46.888 39163.682 - 39374.239: 99.6544% ( 7) 00:36:46.888 39374.239 - 39584.797: 99.7145% ( 8) 00:36:46.888 39584.797 - 39795.354: 99.7746% ( 8) 00:36:46.888 39795.354 - 40005.912: 99.8272% ( 7) 00:36:46.888 40005.912 - 40216.469: 99.8798% ( 7) 00:36:46.888 40216.469 - 40427.027: 99.9324% ( 7) 00:36:46.888 40427.027 - 40637.584: 99.9925% ( 8) 00:36:46.888 40637.584 - 40848.141: 100.0000% ( 1) 00:36:46.888 00:36:46.888 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:36:46.888 ============================================================================== 00:36:46.888 Range in us Cumulative IO count 00:36:46.888 6948.395 - 7001.035: 0.0376% ( 5) 00:36:46.888 7001.035 - 7053.674: 0.1052% ( 9) 00:36:46.888 7053.674 - 7106.313: 0.1578% ( 7) 00:36:46.888 7106.313 - 7158.953: 0.2779% ( 16) 00:36:46.888 7158.953 - 7211.592: 0.3531% ( 10) 00:36:46.888 7211.592 - 7264.231: 0.3981% ( 6) 00:36:46.888 7264.231 - 7316.871: 0.4507% ( 7) 00:36:46.888 7316.871 - 7369.510: 0.5334% ( 11) 00:36:46.888 7369.510 - 7422.149: 0.6010% ( 9) 00:36:46.888 7422.149 - 7474.789: 0.8038% ( 27) 00:36:46.888 7474.789 - 7527.428: 1.1493% ( 46) 00:36:46.888 7527.428 - 7580.067: 1.9081% ( 101) 00:36:46.888 7580.067 - 7632.707: 2.7644% ( 114) 00:36:46.888 7632.707 - 7685.346: 3.4255% ( 88) 00:36:46.888 7685.346 - 7737.986: 4.0790% ( 87) 00:36:46.888 7737.986 - 7790.625: 4.7476% ( 89) 00:36:46.888 7790.625 - 7843.264: 5.1758% ( 57) 00:36:46.888 7843.264 - 7895.904: 5.6265% ( 60) 00:36:46.888 7895.904 - 7948.543: 6.1674% ( 72) 00:36:46.888 7948.543 - 8001.182: 7.3167% ( 153) 00:36:46.888 8001.182 - 8053.822: 8.4210% ( 147) 00:36:46.888 8053.822 - 8106.461: 10.1262% ( 227) 00:36:46.888 8106.461 - 8159.100: 12.3573% ( 297) 00:36:46.888 8159.100 - 8211.740: 14.5358% ( 290) 00:36:46.888 8211.740 - 8264.379: 17.4730% ( 391) 00:36:46.888 8264.379 - 8317.018: 20.5754% ( 413) 00:36:46.888 8317.018 - 8369.658: 23.9108% ( 444) 00:36:46.888 8369.658 - 8422.297: 27.2536% ( 445) 00:36:46.888 8422.297 - 8474.937: 31.0622% ( 507) 00:36:46.888 8474.937 - 8527.576: 34.2097% ( 419) 00:36:46.888 8527.576 - 8580.215: 36.8089% ( 346) 00:36:46.888 8580.215 - 8632.855: 39.3404% ( 337) 00:36:46.888 8632.855 - 8685.494: 41.8870% ( 339) 00:36:46.888 8685.494 - 8738.133: 44.2608% ( 316) 00:36:46.888 8738.133 - 8790.773: 46.7623% ( 333) 00:36:46.888 8790.773 - 8843.412: 49.5267% ( 368) 00:36:46.888 8843.412 - 8896.051: 52.5541% ( 403) 00:36:46.888 8896.051 - 8948.691: 54.9129% ( 314) 00:36:46.888 8948.691 - 9001.330: 57.0312% ( 282) 00:36:46.888 9001.330 - 9053.969: 58.9543% ( 256) 00:36:46.888 9053.969 - 9106.609: 61.2906% ( 311) 00:36:46.888 9106.609 - 9159.248: 63.4691% ( 290) 00:36:46.888 9159.248 - 9211.888: 65.1668% ( 226) 00:36:46.888 9211.888 - 9264.527: 67.1049% ( 258) 00:36:46.888 9264.527 - 9317.166: 68.7575% ( 220) 00:36:46.888 9317.166 - 9369.806: 70.5829% ( 243) 00:36:46.888 9369.806 - 9422.445: 
72.2281% ( 219) 00:36:46.888 9422.445 - 9475.084: 73.9183% ( 225) 00:36:46.888 9475.084 - 9527.724: 75.3606% ( 192) 00:36:46.888 9527.724 - 9580.363: 76.8029% ( 192) 00:36:46.888 9580.363 - 9633.002: 78.1626% ( 181) 00:36:46.888 9633.002 - 9685.642: 79.1091% ( 126) 00:36:46.888 9685.642 - 9738.281: 80.1908% ( 144) 00:36:46.888 9738.281 - 9790.920: 81.0847% ( 119) 00:36:46.888 9790.920 - 9843.560: 81.9862% ( 120) 00:36:46.888 9843.560 - 9896.199: 82.6172% ( 84) 00:36:46.888 9896.199 - 9948.839: 83.1505% ( 71) 00:36:46.888 9948.839 - 10001.478: 83.7590% ( 81) 00:36:46.888 10001.478 - 10054.117: 84.2773% ( 69) 00:36:46.888 10054.117 - 10106.757: 85.0511% ( 103) 00:36:46.888 10106.757 - 10159.396: 85.6070% ( 74) 00:36:46.888 10159.396 - 10212.035: 85.9675% ( 48) 00:36:46.888 10212.035 - 10264.675: 86.2230% ( 34) 00:36:46.888 10264.675 - 10317.314: 86.3807% ( 21) 00:36:46.888 10317.314 - 10369.953: 86.5385% ( 21) 00:36:46.888 10369.953 - 10422.593: 86.7413% ( 27) 00:36:46.888 10422.593 - 10475.232: 86.8389% ( 13) 00:36:46.888 10475.232 - 10527.871: 87.0117% ( 23) 00:36:46.888 10527.871 - 10580.511: 87.0793% ( 9) 00:36:46.888 10580.511 - 10633.150: 87.1544% ( 10) 00:36:46.888 10633.150 - 10685.790: 87.2897% ( 18) 00:36:46.888 10685.790 - 10738.429: 87.4624% ( 23) 00:36:46.888 10738.429 - 10791.068: 87.7479% ( 38) 00:36:46.888 10791.068 - 10843.708: 88.0183% ( 36) 00:36:46.888 10843.708 - 10896.347: 88.2888% ( 36) 00:36:46.888 10896.347 - 10948.986: 88.5968% ( 41) 00:36:46.888 10948.986 - 11001.626: 88.8522% ( 34) 00:36:46.888 11001.626 - 11054.265: 89.0024% ( 20) 00:36:46.888 11054.265 - 11106.904: 89.1001% ( 13) 00:36:46.888 11106.904 - 11159.544: 89.1902% ( 12) 00:36:46.888 11159.544 - 11212.183: 89.3104% ( 16) 00:36:46.888 11212.183 - 11264.822: 89.3780% ( 9) 00:36:46.888 11264.822 - 11317.462: 89.4681% ( 12) 00:36:46.888 11317.462 - 11370.101: 89.5733% ( 14) 00:36:46.888 11370.101 - 11422.741: 89.7761% ( 27) 00:36:46.888 11422.741 - 11475.380: 89.9264% ( 20) 00:36:46.888 11475.380 - 11528.019: 90.2269% ( 40) 00:36:46.888 11528.019 - 11580.659: 90.3921% ( 22) 00:36:46.888 11580.659 - 11633.298: 90.5424% ( 20) 00:36:46.888 11633.298 - 11685.937: 90.6926% ( 20) 00:36:46.888 11685.937 - 11738.577: 90.9029% ( 28) 00:36:46.888 11738.577 - 11791.216: 91.1734% ( 36) 00:36:46.888 11791.216 - 11843.855: 91.3386% ( 22) 00:36:46.888 11843.855 - 11896.495: 91.5340% ( 26) 00:36:46.888 11896.495 - 11949.134: 91.6992% ( 22) 00:36:46.888 11949.134 - 12001.773: 91.8269% ( 17) 00:36:46.888 12001.773 - 12054.413: 91.9396% ( 15) 00:36:46.888 12054.413 - 12107.052: 92.0448% ( 14) 00:36:46.888 12107.052 - 12159.692: 92.1575% ( 15) 00:36:46.888 12159.692 - 12212.331: 92.2776% ( 16) 00:36:46.889 12212.331 - 12264.970: 92.4204% ( 19) 00:36:46.889 12264.970 - 12317.610: 92.6382% ( 29) 00:36:46.889 12317.610 - 12370.249: 92.9237% ( 38) 00:36:46.889 12370.249 - 12422.888: 93.0965% ( 23) 00:36:46.889 12422.888 - 12475.528: 93.2692% ( 23) 00:36:46.889 12475.528 - 12528.167: 93.4570% ( 25) 00:36:46.889 12528.167 - 12580.806: 93.6223% ( 22) 00:36:46.889 12580.806 - 12633.446: 93.7575% ( 18) 00:36:46.889 12633.446 - 12686.085: 93.8927% ( 18) 00:36:46.889 12686.085 - 12738.724: 94.1181% ( 30) 00:36:46.889 12738.724 - 12791.364: 94.3434% ( 30) 00:36:46.889 12791.364 - 12844.003: 94.4261% ( 11) 00:36:46.889 12844.003 - 12896.643: 94.4787% ( 7) 00:36:46.889 12896.643 - 12949.282: 94.5312% ( 7) 00:36:46.889 12949.282 - 13001.921: 94.5913% ( 8) 00:36:46.889 13001.921 - 13054.561: 94.6289% ( 5) 00:36:46.889 13054.561 - 13107.200: 
94.6665% ( 5) 00:36:46.889 13107.200 - 13159.839: 94.6890% ( 3) 00:36:46.889 13159.839 - 13212.479: 94.7341% ( 6) 00:36:46.889 13212.479 - 13265.118: 94.7716% ( 5) 00:36:46.889 13265.118 - 13317.757: 94.7942% ( 3) 00:36:46.889 13317.757 - 13370.397: 94.8167% ( 3) 00:36:46.889 13370.397 - 13423.036: 94.8468% ( 4) 00:36:46.889 13423.036 - 13475.676: 94.8918% ( 6) 00:36:46.889 13475.676 - 13580.954: 95.0195% ( 17) 00:36:46.889 13580.954 - 13686.233: 95.2749% ( 34) 00:36:46.889 13686.233 - 13791.512: 95.3651% ( 12) 00:36:46.889 13791.512 - 13896.790: 95.4177% ( 7) 00:36:46.889 13896.790 - 14002.069: 95.4627% ( 6) 00:36:46.889 14002.069 - 14107.348: 95.5078% ( 6) 00:36:46.889 14107.348 - 14212.627: 95.5980% ( 12) 00:36:46.889 14212.627 - 14317.905: 95.6956% ( 13) 00:36:46.889 14317.905 - 14423.184: 95.7707% ( 10) 00:36:46.889 14423.184 - 14528.463: 95.8158% ( 6) 00:36:46.889 14528.463 - 14633.741: 95.8909% ( 10) 00:36:46.889 14633.741 - 14739.020: 95.9811% ( 12) 00:36:46.889 14739.020 - 14844.299: 96.0186% ( 5) 00:36:46.889 14844.299 - 14949.578: 96.0562% ( 5) 00:36:46.889 14949.578 - 15054.856: 96.0938% ( 5) 00:36:46.889 15054.856 - 15160.135: 96.2139% ( 16) 00:36:46.889 15160.135 - 15265.414: 96.4468% ( 31) 00:36:46.889 15265.414 - 15370.692: 96.5294% ( 11) 00:36:46.889 15370.692 - 15475.971: 96.5445% ( 2) 00:36:46.889 15475.971 - 15581.250: 96.5745% ( 4) 00:36:46.889 15581.250 - 15686.529: 96.5895% ( 2) 00:36:46.889 15686.529 - 15791.807: 96.6121% ( 3) 00:36:46.889 15791.807 - 15897.086: 96.6271% ( 2) 00:36:46.889 15897.086 - 16002.365: 96.6346% ( 1) 00:36:46.889 16002.365 - 16107.643: 96.6647% ( 4) 00:36:46.889 16107.643 - 16212.922: 96.7248% ( 8) 00:36:46.889 16212.922 - 16318.201: 96.9952% ( 36) 00:36:46.889 16318.201 - 16423.480: 97.0778% ( 11) 00:36:46.889 16423.480 - 16528.758: 97.1154% ( 5) 00:36:46.889 16949.873 - 17055.152: 97.1304% ( 2) 00:36:46.889 17055.152 - 17160.431: 97.1454% ( 2) 00:36:46.889 17160.431 - 17265.709: 97.1830% ( 5) 00:36:46.889 17265.709 - 17370.988: 97.2656% ( 11) 00:36:46.889 17370.988 - 17476.267: 97.3332% ( 9) 00:36:46.889 17476.267 - 17581.545: 97.3483% ( 2) 00:36:46.889 17581.545 - 17686.824: 97.3633% ( 2) 00:36:46.889 17686.824 - 17792.103: 97.4008% ( 5) 00:36:46.889 17792.103 - 17897.382: 97.4384% ( 5) 00:36:46.889 17897.382 - 18002.660: 97.4760% ( 5) 00:36:46.889 18002.660 - 18107.939: 97.5511% ( 10) 00:36:46.889 18107.939 - 18213.218: 97.6337% ( 11) 00:36:46.889 18213.218 - 18318.496: 97.7088% ( 10) 00:36:46.889 18318.496 - 18423.775: 97.7689% ( 8) 00:36:46.889 18423.775 - 18529.054: 97.8140% ( 6) 00:36:46.889 18529.054 - 18634.333: 97.8591% ( 6) 00:36:46.889 18634.333 - 18739.611: 97.9041% ( 6) 00:36:46.889 18739.611 - 18844.890: 97.9567% ( 7) 00:36:46.889 18844.890 - 18950.169: 97.9943% ( 5) 00:36:46.889 18950.169 - 19055.447: 98.0394% ( 6) 00:36:46.889 19055.447 - 19160.726: 98.0769% ( 5) 00:36:46.889 20845.186 - 20950.464: 98.0844% ( 1) 00:36:46.889 21055.743 - 21161.022: 98.1295% ( 6) 00:36:46.889 21161.022 - 21266.300: 98.1896% ( 8) 00:36:46.889 21266.300 - 21371.579: 98.3023% ( 15) 00:36:46.889 21371.579 - 21476.858: 98.4826% ( 24) 00:36:46.889 21476.858 - 21582.137: 98.5427% ( 8) 00:36:46.889 21582.137 - 21687.415: 98.5577% ( 2) 00:36:46.889 26109.121 - 26214.400: 98.5652% ( 1) 00:36:46.889 26214.400 - 26319.679: 98.6328% ( 9) 00:36:46.889 26319.679 - 26424.957: 98.6704% ( 5) 00:36:46.889 26424.957 - 26530.236: 98.7079% ( 5) 00:36:46.889 26530.236 - 26635.515: 98.7530% ( 6) 00:36:46.889 26635.515 - 26740.794: 98.7981% ( 6) 00:36:46.889 
26740.794 - 26846.072: 98.8431% ( 6) 00:36:46.889 26846.072 - 26951.351: 98.8882% ( 6) 00:36:46.889 26951.351 - 27161.908: 98.9709% ( 11) 00:36:46.889 27161.908 - 27372.466: 99.0385% ( 9) 00:36:46.889 29899.155 - 30109.712: 99.0986% ( 8) 00:36:46.889 30109.712 - 30320.270: 99.1587% ( 8) 00:36:46.889 30320.270 - 30530.827: 99.2188% ( 8) 00:36:46.889 30530.827 - 30741.385: 99.2713% ( 7) 00:36:46.889 30741.385 - 30951.942: 99.3389% ( 9) 00:36:46.889 30951.942 - 31162.500: 99.3915% ( 7) 00:36:46.889 31162.500 - 31373.057: 99.4516% ( 8) 00:36:46.889 31373.057 - 31583.614: 99.5042% ( 7) 00:36:46.889 31583.614 - 31794.172: 99.5192% ( 2) 00:36:46.889 37689.780 - 37900.337: 99.5267% ( 1) 00:36:46.889 37900.337 - 38110.895: 99.5868% ( 8) 00:36:46.889 38110.895 - 38321.452: 99.6394% ( 7) 00:36:46.889 38321.452 - 38532.010: 99.6995% ( 8) 00:36:46.889 38532.010 - 38742.567: 99.7521% ( 7) 00:36:46.889 38742.567 - 38953.124: 99.8122% ( 8) 00:36:46.889 38953.124 - 39163.682: 99.8648% ( 7) 00:36:46.889 39163.682 - 39374.239: 99.9249% ( 8) 00:36:46.889 39374.239 - 39584.797: 99.9775% ( 7) 00:36:46.889 39584.797 - 39795.354: 100.0000% ( 3) 00:36:46.889 00:36:46.889 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:36:46.889 ============================================================================== 00:36:46.889 Range in us Cumulative IO count 00:36:46.889 7316.871 - 7369.510: 0.0075% ( 1) 00:36:46.889 7369.510 - 7422.149: 0.0601% ( 7) 00:36:46.889 7422.149 - 7474.789: 0.1502% ( 12) 00:36:46.889 7474.789 - 7527.428: 0.2855% ( 18) 00:36:46.889 7527.428 - 7580.067: 0.5484% ( 35) 00:36:46.889 7580.067 - 7632.707: 0.8714% ( 43) 00:36:46.889 7632.707 - 7685.346: 1.3371% ( 62) 00:36:46.889 7685.346 - 7737.986: 2.0883% ( 100) 00:36:46.889 7737.986 - 7790.625: 3.0799% ( 132) 00:36:46.889 7790.625 - 7843.264: 4.3419% ( 168) 00:36:46.889 7843.264 - 7895.904: 5.9120% ( 209) 00:36:46.889 7895.904 - 7948.543: 7.3167% ( 187) 00:36:46.889 7948.543 - 8001.182: 8.5186% ( 160) 00:36:46.889 8001.182 - 8053.822: 10.4342% ( 255) 00:36:46.889 8053.822 - 8106.461: 12.1394% ( 227) 00:36:46.889 8106.461 - 8159.100: 14.3404% ( 293) 00:36:46.889 8159.100 - 8211.740: 16.7368% ( 319) 00:36:46.889 8211.740 - 8264.379: 19.2608% ( 336) 00:36:46.889 8264.379 - 8317.018: 22.0027% ( 365) 00:36:46.889 8317.018 - 8369.658: 24.9549% ( 393) 00:36:46.889 8369.658 - 8422.297: 27.9147% ( 394) 00:36:46.889 8422.297 - 8474.937: 30.9796% ( 408) 00:36:46.889 8474.937 - 8527.576: 33.6989% ( 362) 00:36:46.889 8527.576 - 8580.215: 36.6436% ( 392) 00:36:46.889 8580.215 - 8632.855: 39.1001% ( 327) 00:36:46.889 8632.855 - 8685.494: 42.2025% ( 413) 00:36:46.889 8685.494 - 8738.133: 44.6514% ( 326) 00:36:46.889 8738.133 - 8790.773: 47.5661% ( 388) 00:36:46.889 8790.773 - 8843.412: 50.3080% ( 365) 00:36:46.889 8843.412 - 8896.051: 52.8020% ( 332) 00:36:46.889 8896.051 - 8948.691: 55.3561% ( 340) 00:36:46.889 8948.691 - 9001.330: 57.8576% ( 333) 00:36:46.889 9001.330 - 9053.969: 60.3365% ( 330) 00:36:46.889 9053.969 - 9106.609: 62.8305% ( 332) 00:36:46.889 9106.609 - 9159.248: 65.1668% ( 311) 00:36:46.889 9159.248 - 9211.888: 67.1875% ( 269) 00:36:46.889 9211.888 - 9264.527: 68.9528% ( 235) 00:36:46.889 9264.527 - 9317.166: 70.3876% ( 191) 00:36:46.889 9317.166 - 9369.806: 71.6947% ( 174) 00:36:46.889 9369.806 - 9422.445: 73.2046% ( 201) 00:36:46.889 9422.445 - 9475.084: 74.3089% ( 147) 00:36:46.889 9475.084 - 9527.724: 75.6010% ( 172) 00:36:46.889 9527.724 - 9580.363: 76.4949% ( 119) 00:36:46.889 9580.363 - 9633.002: 77.2461% ( 100) 00:36:46.889 
9633.002 - 9685.642: 78.2903% ( 139) 00:36:46.889 9685.642 - 9738.281: 79.1016% ( 108) 00:36:46.889 9738.281 - 9790.920: 79.6950% ( 79) 00:36:46.889 9790.920 - 9843.560: 80.2960% ( 80) 00:36:46.889 9843.560 - 9896.199: 80.8594% ( 75) 00:36:46.889 9896.199 - 9948.839: 81.4078% ( 73) 00:36:46.889 9948.839 - 10001.478: 82.0237% ( 82) 00:36:46.889 10001.478 - 10054.117: 82.6773% ( 87) 00:36:46.889 10054.117 - 10106.757: 83.1806% ( 67) 00:36:46.889 10106.757 - 10159.396: 83.5111% ( 44) 00:36:46.889 10159.396 - 10212.035: 83.7740% ( 35) 00:36:46.889 10212.035 - 10264.675: 84.0971% ( 43) 00:36:46.889 10264.675 - 10317.314: 84.5403% ( 59) 00:36:46.889 10317.314 - 10369.953: 84.9609% ( 56) 00:36:46.889 10369.953 - 10422.593: 85.3516% ( 52) 00:36:46.889 10422.593 - 10475.232: 85.6445% ( 39) 00:36:46.889 10475.232 - 10527.871: 85.8624% ( 29) 00:36:46.889 10527.871 - 10580.511: 86.0727% ( 28) 00:36:46.889 10580.511 - 10633.150: 86.3507% ( 37) 00:36:46.889 10633.150 - 10685.790: 86.5460% ( 26) 00:36:46.889 10685.790 - 10738.429: 86.7788% ( 31) 00:36:46.889 10738.429 - 10791.068: 87.1544% ( 50) 00:36:46.889 10791.068 - 10843.708: 87.3573% ( 27) 00:36:46.889 10843.708 - 10896.347: 87.5376% ( 24) 00:36:46.889 10896.347 - 10948.986: 87.8230% ( 38) 00:36:46.889 10948.986 - 11001.626: 88.2287% ( 54) 00:36:46.889 11001.626 - 11054.265: 88.6343% ( 54) 00:36:46.889 11054.265 - 11106.904: 88.9648% ( 44) 00:36:46.889 11106.904 - 11159.544: 89.1827% ( 29) 00:36:46.889 11159.544 - 11212.183: 89.3705% ( 25) 00:36:46.890 11212.183 - 11264.822: 89.5358% ( 22) 00:36:46.890 11264.822 - 11317.462: 89.7160% ( 24) 00:36:46.890 11317.462 - 11370.101: 89.9038% ( 25) 00:36:46.890 11370.101 - 11422.741: 90.1743% ( 36) 00:36:46.890 11422.741 - 11475.380: 90.4823% ( 41) 00:36:46.890 11475.380 - 11528.019: 90.8579% ( 50) 00:36:46.890 11528.019 - 11580.659: 91.1508% ( 39) 00:36:46.890 11580.659 - 11633.298: 91.3011% ( 20) 00:36:46.890 11633.298 - 11685.937: 91.4663% ( 22) 00:36:46.890 11685.937 - 11738.577: 91.5640% ( 13) 00:36:46.890 11738.577 - 11791.216: 91.6917% ( 17) 00:36:46.890 11791.216 - 11843.855: 91.9546% ( 35) 00:36:46.890 11843.855 - 11896.495: 92.1274% ( 23) 00:36:46.890 11896.495 - 11949.134: 92.2551% ( 17) 00:36:46.890 11949.134 - 12001.773: 92.3828% ( 17) 00:36:46.890 12001.773 - 12054.413: 92.5180% ( 18) 00:36:46.890 12054.413 - 12107.052: 92.5931% ( 10) 00:36:46.890 12107.052 - 12159.692: 92.6232% ( 4) 00:36:46.890 12159.692 - 12212.331: 92.6532% ( 4) 00:36:46.890 12212.331 - 12264.970: 92.7284% ( 10) 00:36:46.890 12264.970 - 12317.610: 92.7885% ( 8) 00:36:46.890 12317.610 - 12370.249: 92.8636% ( 10) 00:36:46.890 12370.249 - 12422.888: 93.0213% ( 21) 00:36:46.890 12422.888 - 12475.528: 93.1490% ( 17) 00:36:46.890 12475.528 - 12528.167: 93.4420% ( 39) 00:36:46.890 12528.167 - 12580.806: 93.6298% ( 25) 00:36:46.890 12580.806 - 12633.446: 93.7124% ( 11) 00:36:46.890 12633.446 - 12686.085: 93.7650% ( 7) 00:36:46.890 12686.085 - 12738.724: 93.8477% ( 11) 00:36:46.890 12738.724 - 12791.364: 93.9153% ( 9) 00:36:46.890 12791.364 - 12844.003: 94.0505% ( 18) 00:36:46.890 12844.003 - 12896.643: 94.1556% ( 14) 00:36:46.890 12896.643 - 12949.282: 94.2909% ( 18) 00:36:46.890 12949.282 - 13001.921: 94.4561% ( 22) 00:36:46.890 13001.921 - 13054.561: 94.5838% ( 17) 00:36:46.890 13054.561 - 13107.200: 94.7416% ( 21) 00:36:46.890 13107.200 - 13159.839: 94.8693% ( 17) 00:36:46.890 13159.839 - 13212.479: 94.9745% ( 14) 00:36:46.890 13212.479 - 13265.118: 95.0871% ( 15) 00:36:46.890 13265.118 - 13317.757: 95.1698% ( 11) 00:36:46.890 
13317.757 - 13370.397: 95.2148% ( 6) 00:36:46.890 13370.397 - 13423.036: 95.2524% ( 5) 00:36:46.890 13423.036 - 13475.676: 95.2900% ( 5) 00:36:46.890 13475.676 - 13580.954: 95.3876% ( 13) 00:36:46.890 13580.954 - 13686.233: 95.4552% ( 9) 00:36:46.890 13686.233 - 13791.512: 95.5153% ( 8) 00:36:46.890 13791.512 - 13896.790: 95.5679% ( 7) 00:36:46.890 13896.790 - 14002.069: 95.6055% ( 5) 00:36:46.890 14002.069 - 14107.348: 95.6355% ( 4) 00:36:46.890 14107.348 - 14212.627: 95.6806% ( 6) 00:36:46.890 14212.627 - 14317.905: 95.6881% ( 1) 00:36:46.890 14317.905 - 14423.184: 95.6956% ( 1) 00:36:46.890 14528.463 - 14633.741: 95.7031% ( 1) 00:36:46.890 14633.741 - 14739.020: 95.7332% ( 4) 00:36:46.890 14739.020 - 14844.299: 95.8534% ( 16) 00:36:46.890 14844.299 - 14949.578: 96.0261% ( 23) 00:36:46.890 14949.578 - 15054.856: 96.1013% ( 10) 00:36:46.890 15054.856 - 15160.135: 96.1614% ( 8) 00:36:46.890 15160.135 - 15265.414: 96.2139% ( 7) 00:36:46.890 15265.414 - 15370.692: 96.2891% ( 10) 00:36:46.890 15370.692 - 15475.971: 96.4243% ( 18) 00:36:46.890 15475.971 - 15581.250: 96.4769% ( 7) 00:36:46.890 15581.250 - 15686.529: 96.5069% ( 4) 00:36:46.890 15686.529 - 15791.807: 96.5520% ( 6) 00:36:46.890 15791.807 - 15897.086: 96.5895% ( 5) 00:36:46.890 15897.086 - 16002.365: 96.6271% ( 5) 00:36:46.890 16002.365 - 16107.643: 96.6346% ( 1) 00:36:46.890 16634.037 - 16739.316: 96.7022% ( 9) 00:36:46.890 16739.316 - 16844.594: 96.7698% ( 9) 00:36:46.890 16844.594 - 16949.873: 96.8600% ( 12) 00:36:46.890 16949.873 - 17055.152: 96.9952% ( 18) 00:36:46.890 17055.152 - 17160.431: 97.2356% ( 32) 00:36:46.890 17160.431 - 17265.709: 97.3332% ( 13) 00:36:46.890 17265.709 - 17370.988: 97.3933% ( 8) 00:36:46.890 17370.988 - 17476.267: 97.4534% ( 8) 00:36:46.890 17476.267 - 17581.545: 97.5661% ( 15) 00:36:46.890 17581.545 - 17686.824: 97.6713% ( 14) 00:36:46.890 17686.824 - 17792.103: 97.7239% ( 7) 00:36:46.890 17792.103 - 17897.382: 97.7915% ( 9) 00:36:46.890 17897.382 - 18002.660: 97.8666% ( 10) 00:36:46.890 18002.660 - 18107.939: 97.9117% ( 6) 00:36:46.890 18107.939 - 18213.218: 97.9492% ( 5) 00:36:46.890 18213.218 - 18318.496: 97.9943% ( 6) 00:36:46.890 18318.496 - 18423.775: 98.0394% ( 6) 00:36:46.890 18423.775 - 18529.054: 98.0769% ( 5) 00:36:46.890 21897.973 - 22003.251: 98.0844% ( 1) 00:36:46.890 22003.251 - 22108.530: 98.1295% ( 6) 00:36:46.890 22108.530 - 22213.809: 98.2121% ( 11) 00:36:46.890 22213.809 - 22319.088: 98.4375% ( 30) 00:36:46.890 22319.088 - 22424.366: 98.5276% ( 12) 00:36:46.890 22424.366 - 22529.645: 98.5577% ( 4) 00:36:46.890 25266.892 - 25372.170: 98.6028% ( 6) 00:36:46.890 25372.170 - 25477.449: 98.6704% ( 9) 00:36:46.890 25477.449 - 25582.728: 98.6929% ( 3) 00:36:46.890 25582.728 - 25688.006: 98.7455% ( 7) 00:36:46.890 25688.006 - 25793.285: 98.7906% ( 6) 00:36:46.890 25793.285 - 25898.564: 98.8356% ( 6) 00:36:46.890 25898.564 - 26003.843: 98.8807% ( 6) 00:36:46.890 26003.843 - 26109.121: 98.9183% ( 5) 00:36:46.890 26109.121 - 26214.400: 98.9633% ( 6) 00:36:46.890 26214.400 - 26319.679: 99.0084% ( 6) 00:36:46.890 26319.679 - 26424.957: 99.0385% ( 4) 00:36:46.890 28214.696 - 28425.253: 99.0910% ( 7) 00:36:46.890 28425.253 - 28635.810: 99.1511% ( 8) 00:36:46.890 28635.810 - 28846.368: 99.2112% ( 8) 00:36:46.890 28846.368 - 29056.925: 99.2713% ( 8) 00:36:46.890 29056.925 - 29267.483: 99.3314% ( 8) 00:36:46.890 29267.483 - 29478.040: 99.3840% ( 7) 00:36:46.890 29478.040 - 29688.598: 99.4441% ( 8) 00:36:46.890 29688.598 - 29899.155: 99.4967% ( 7) 00:36:46.890 29899.155 - 30109.712: 99.5192% ( 3) 
00:36:46.890 35794.763 - 36005.320: 99.5493% ( 4) 00:36:46.890 36005.320 - 36215.878: 99.6019% ( 7) 00:36:46.890 36215.878 - 36426.435: 99.6620% ( 8) 00:36:46.890 36426.435 - 36636.993: 99.7145% ( 7) 00:36:46.890 36636.993 - 36847.550: 99.7746% ( 8) 00:36:46.890 36847.550 - 37058.108: 99.8272% ( 7) 00:36:46.890 37058.108 - 37268.665: 99.8873% ( 8) 00:36:46.890 37268.665 - 37479.222: 99.9474% ( 8) 00:36:46.890 37479.222 - 37689.780: 100.0000% ( 7) 00:36:46.890 00:36:46.890 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:36:46.890 ============================================================================== 00:36:46.890 Range in us Cumulative IO count 00:36:46.890 4158.509 - 4184.829: 0.0300% ( 4) 00:36:46.890 4211.149 - 4237.468: 0.0376% ( 1) 00:36:46.890 4263.788 - 4290.108: 0.0451% ( 1) 00:36:46.890 4290.108 - 4316.427: 0.0526% ( 1) 00:36:46.890 4316.427 - 4342.747: 0.0601% ( 1) 00:36:46.890 4369.067 - 4395.386: 0.0676% ( 1) 00:36:46.890 6948.395 - 7001.035: 0.0751% ( 1) 00:36:46.890 7106.313 - 7158.953: 0.0826% ( 1) 00:36:46.890 7158.953 - 7211.592: 0.1052% ( 3) 00:36:46.890 7211.592 - 7264.231: 0.1502% ( 6) 00:36:46.890 7264.231 - 7316.871: 0.2103% ( 8) 00:36:46.890 7316.871 - 7369.510: 0.3080% ( 13) 00:36:46.890 7369.510 - 7422.149: 0.4958% ( 25) 00:36:46.890 7422.149 - 7474.789: 0.6761% ( 24) 00:36:46.890 7474.789 - 7527.428: 0.9615% ( 38) 00:36:46.890 7527.428 - 7580.067: 1.2395% ( 37) 00:36:46.890 7580.067 - 7632.707: 1.6451% ( 54) 00:36:46.890 7632.707 - 7685.346: 2.0508% ( 54) 00:36:46.890 7685.346 - 7737.986: 2.6292% ( 77) 00:36:46.890 7737.986 - 7790.625: 3.1926% ( 75) 00:36:46.890 7790.625 - 7843.264: 4.0865% ( 119) 00:36:46.890 7843.264 - 7895.904: 5.2809% ( 159) 00:36:46.890 7895.904 - 7948.543: 6.3627% ( 144) 00:36:46.890 7948.543 - 8001.182: 8.1280% ( 235) 00:36:46.890 8001.182 - 8053.822: 9.7656% ( 218) 00:36:46.890 8053.822 - 8106.461: 11.7112% ( 259) 00:36:46.890 8106.461 - 8159.100: 14.1226% ( 321) 00:36:46.890 8159.100 - 8211.740: 16.7668% ( 352) 00:36:46.890 8211.740 - 8264.379: 19.5913% ( 376) 00:36:46.890 8264.379 - 8317.018: 22.5586% ( 395) 00:36:46.890 8317.018 - 8369.658: 25.1127% ( 340) 00:36:46.890 8369.658 - 8422.297: 28.1550% ( 405) 00:36:46.890 8422.297 - 8474.937: 30.9570% ( 373) 00:36:46.890 8474.937 - 8527.576: 33.3909% ( 324) 00:36:46.890 8527.576 - 8580.215: 36.1028% ( 361) 00:36:46.890 8580.215 - 8632.855: 39.0625% ( 394) 00:36:46.890 8632.855 - 8685.494: 41.6917% ( 350) 00:36:46.890 8685.494 - 8738.133: 44.6289% ( 391) 00:36:46.891 8738.133 - 8790.773: 47.5661% ( 391) 00:36:46.891 8790.773 - 8843.412: 50.7136% ( 419) 00:36:46.891 8843.412 - 8896.051: 53.4856% ( 369) 00:36:46.891 8896.051 - 8948.691: 56.0171% ( 337) 00:36:46.891 8948.691 - 9001.330: 58.4886% ( 329) 00:36:46.891 9001.330 - 9053.969: 60.5093% ( 269) 00:36:46.891 9053.969 - 9106.609: 62.3648% ( 247) 00:36:46.891 9106.609 - 9159.248: 64.2803% ( 255) 00:36:46.891 9159.248 - 9211.888: 66.4062% ( 283) 00:36:46.891 9211.888 - 9264.527: 68.4946% ( 278) 00:36:46.891 9264.527 - 9317.166: 70.3350% ( 245) 00:36:46.891 9317.166 - 9369.806: 72.1229% ( 238) 00:36:46.891 9369.806 - 9422.445: 73.7455% ( 216) 00:36:46.891 9422.445 - 9475.084: 75.2855% ( 205) 00:36:46.891 9475.084 - 9527.724: 76.2921% ( 134) 00:36:46.891 9527.724 - 9580.363: 77.2536% ( 128) 00:36:46.891 9580.363 - 9633.002: 77.8546% ( 80) 00:36:46.891 9633.002 - 9685.642: 78.6959% ( 112) 00:36:46.891 9685.642 - 9738.281: 79.3119% ( 82) 00:36:46.891 9738.281 - 9790.920: 79.9204% ( 81) 00:36:46.891 9790.920 - 9843.560: 
80.4763% ( 74) 00:36:46.891 9843.560 - 9896.199: 81.2200% ( 99) 00:36:46.891 9896.199 - 9948.839: 81.6632% ( 59) 00:36:46.891 9948.839 - 10001.478: 82.1965% ( 71) 00:36:46.891 10001.478 - 10054.117: 82.8651% ( 89) 00:36:46.891 10054.117 - 10106.757: 83.2782% ( 55) 00:36:46.891 10106.757 - 10159.396: 83.7290% ( 60) 00:36:46.891 10159.396 - 10212.035: 83.9694% ( 32) 00:36:46.891 10212.035 - 10264.675: 84.2849% ( 42) 00:36:46.891 10264.675 - 10317.314: 84.5628% ( 37) 00:36:46.891 10317.314 - 10369.953: 84.8032% ( 32) 00:36:46.891 10369.953 - 10422.593: 85.1037% ( 40) 00:36:46.891 10422.593 - 10475.232: 85.3741% ( 36) 00:36:46.891 10475.232 - 10527.871: 85.5844% ( 28) 00:36:46.891 10527.871 - 10580.511: 85.8699% ( 38) 00:36:46.891 10580.511 - 10633.150: 86.1854% ( 42) 00:36:46.891 10633.150 - 10685.790: 86.5159% ( 44) 00:36:46.891 10685.790 - 10738.429: 86.6436% ( 17) 00:36:46.891 10738.429 - 10791.068: 86.7413% ( 13) 00:36:46.891 10791.068 - 10843.708: 86.8239% ( 11) 00:36:46.891 10843.708 - 10896.347: 86.9066% ( 11) 00:36:46.891 10896.347 - 10948.986: 87.0192% ( 15) 00:36:46.891 10948.986 - 11001.626: 87.1394% ( 16) 00:36:46.891 11001.626 - 11054.265: 87.3197% ( 24) 00:36:46.891 11054.265 - 11106.904: 87.5225% ( 27) 00:36:46.891 11106.904 - 11159.544: 87.9657% ( 59) 00:36:46.891 11159.544 - 11212.183: 88.4240% ( 61) 00:36:46.891 11212.183 - 11264.822: 88.8371% ( 55) 00:36:46.891 11264.822 - 11317.462: 89.0775% ( 32) 00:36:46.891 11317.462 - 11370.101: 89.2879% ( 28) 00:36:46.891 11370.101 - 11422.741: 89.5057% ( 29) 00:36:46.891 11422.741 - 11475.380: 89.5959% ( 12) 00:36:46.891 11475.380 - 11528.019: 89.7236% ( 17) 00:36:46.891 11528.019 - 11580.659: 89.8588% ( 18) 00:36:46.891 11580.659 - 11633.298: 90.0541% ( 26) 00:36:46.891 11633.298 - 11685.937: 90.3546% ( 40) 00:36:46.891 11685.937 - 11738.577: 90.7903% ( 58) 00:36:46.891 11738.577 - 11791.216: 91.1133% ( 43) 00:36:46.891 11791.216 - 11843.855: 91.2710% ( 21) 00:36:46.891 11843.855 - 11896.495: 91.5790% ( 41) 00:36:46.891 11896.495 - 11949.134: 91.6541% ( 10) 00:36:46.891 11949.134 - 12001.773: 91.7668% ( 15) 00:36:46.891 12001.773 - 12054.413: 91.9020% ( 18) 00:36:46.891 12054.413 - 12107.052: 92.0373% ( 18) 00:36:46.891 12107.052 - 12159.692: 92.3002% ( 35) 00:36:46.891 12159.692 - 12212.331: 92.6307% ( 44) 00:36:46.891 12212.331 - 12264.970: 92.8561% ( 30) 00:36:46.891 12264.970 - 12317.610: 92.9838% ( 17) 00:36:46.891 12317.610 - 12370.249: 93.1040% ( 16) 00:36:46.891 12370.249 - 12422.888: 93.2392% ( 18) 00:36:46.891 12422.888 - 12475.528: 93.4796% ( 32) 00:36:46.891 12475.528 - 12528.167: 93.6073% ( 17) 00:36:46.891 12528.167 - 12580.806: 93.7275% ( 16) 00:36:46.891 12580.806 - 12633.446: 93.8326% ( 14) 00:36:46.891 12633.446 - 12686.085: 93.9153% ( 11) 00:36:46.891 12686.085 - 12738.724: 93.9829% ( 9) 00:36:46.891 12738.724 - 12791.364: 94.0430% ( 8) 00:36:46.891 12791.364 - 12844.003: 94.0655% ( 3) 00:36:46.891 12844.003 - 12896.643: 94.1031% ( 5) 00:36:46.891 12896.643 - 12949.282: 94.1707% ( 9) 00:36:46.891 12949.282 - 13001.921: 94.3134% ( 19) 00:36:46.891 13001.921 - 13054.561: 94.3810% ( 9) 00:36:46.891 13054.561 - 13107.200: 94.4712% ( 12) 00:36:46.891 13107.200 - 13159.839: 94.5763% ( 14) 00:36:46.891 13159.839 - 13212.479: 94.7341% ( 21) 00:36:46.891 13212.479 - 13265.118: 94.8543% ( 16) 00:36:46.891 13265.118 - 13317.757: 94.9594% ( 14) 00:36:46.891 13317.757 - 13370.397: 95.0270% ( 9) 00:36:46.891 13370.397 - 13423.036: 95.1097% ( 11) 00:36:46.891 13423.036 - 13475.676: 95.2224% ( 15) 00:36:46.891 13475.676 - 
13580.954: 95.4552% ( 31) 00:36:46.891 13580.954 - 13686.233: 95.5604% ( 14) 00:36:46.891 13686.233 - 13791.512: 95.6430% ( 11) 00:36:46.891 13791.512 - 13896.790: 95.6656% ( 3) 00:36:46.891 13896.790 - 14002.069: 95.6806% ( 2) 00:36:46.891 14317.905 - 14423.184: 95.6881% ( 1) 00:36:46.891 14423.184 - 14528.463: 95.7707% ( 11) 00:36:46.891 14528.463 - 14633.741: 96.0261% ( 34) 00:36:46.891 14633.741 - 14739.020: 96.0637% ( 5) 00:36:46.891 14739.020 - 14844.299: 96.0938% ( 4) 00:36:46.891 14844.299 - 14949.578: 96.1088% ( 2) 00:36:46.891 14949.578 - 15054.856: 96.1313% ( 3) 00:36:46.891 15054.856 - 15160.135: 96.1538% ( 3) 00:36:46.891 15475.971 - 15581.250: 96.1689% ( 2) 00:36:46.891 15581.250 - 15686.529: 96.2064% ( 5) 00:36:46.891 15686.529 - 15791.807: 96.2515% ( 6) 00:36:46.891 15791.807 - 15897.086: 96.3942% ( 19) 00:36:46.891 15897.086 - 16002.365: 96.6421% ( 33) 00:36:46.891 16002.365 - 16107.643: 96.6947% ( 7) 00:36:46.891 16107.643 - 16212.922: 96.7398% ( 6) 00:36:46.891 16212.922 - 16318.201: 96.7924% ( 7) 00:36:46.891 16318.201 - 16423.480: 96.8675% ( 10) 00:36:46.891 16423.480 - 16528.758: 96.9050% ( 5) 00:36:46.891 16528.758 - 16634.037: 97.0177% ( 15) 00:36:46.891 16634.037 - 16739.316: 97.0928% ( 10) 00:36:46.891 16739.316 - 16844.594: 97.1755% ( 11) 00:36:46.891 16844.594 - 16949.873: 97.2506% ( 10) 00:36:46.891 16949.873 - 17055.152: 97.3332% ( 11) 00:36:46.891 17055.152 - 17160.431: 97.4384% ( 14) 00:36:46.891 17160.431 - 17265.709: 97.5736% ( 18) 00:36:46.891 17265.709 - 17370.988: 97.7013% ( 17) 00:36:46.891 17370.988 - 17476.267: 97.9192% ( 29) 00:36:46.891 17476.267 - 17581.545: 97.9943% ( 10) 00:36:46.891 17581.545 - 17686.824: 98.0619% ( 9) 00:36:46.891 17686.824 - 17792.103: 98.0769% ( 2) 00:36:46.891 22845.481 - 22950.760: 98.0844% ( 1) 00:36:46.891 22950.760 - 23056.039: 98.1145% ( 4) 00:36:46.891 23056.039 - 23161.317: 98.1671% ( 7) 00:36:46.891 23161.317 - 23266.596: 98.2272% ( 8) 00:36:46.891 23266.596 - 23371.875: 98.4751% ( 33) 00:36:46.891 23371.875 - 23477.153: 98.5352% ( 8) 00:36:46.891 23477.153 - 23582.432: 98.5577% ( 3) 00:36:46.891 24740.498 - 24845.777: 98.5727% ( 2) 00:36:46.891 24845.777 - 24951.055: 98.6478% ( 10) 00:36:46.891 24951.055 - 25056.334: 98.7004% ( 7) 00:36:46.891 25056.334 - 25161.613: 98.7305% ( 4) 00:36:46.891 25161.613 - 25266.892: 98.7680% ( 5) 00:36:46.891 25266.892 - 25372.170: 98.8131% ( 6) 00:36:46.891 25372.170 - 25477.449: 98.8582% ( 6) 00:36:46.891 25477.449 - 25582.728: 98.9032% ( 6) 00:36:46.891 25582.728 - 25688.006: 98.9483% ( 6) 00:36:46.891 25688.006 - 25793.285: 98.9934% ( 6) 00:36:46.891 25793.285 - 25898.564: 99.0385% ( 6) 00:36:46.891 26635.515 - 26740.794: 99.0610% ( 3) 00:36:46.891 26740.794 - 26846.072: 99.0910% ( 4) 00:36:46.891 26846.072 - 26951.351: 99.1211% ( 4) 00:36:46.891 26951.351 - 27161.908: 99.1812% ( 8) 00:36:46.891 27161.908 - 27372.466: 99.2413% ( 8) 00:36:46.891 27372.466 - 27583.023: 99.3014% ( 8) 00:36:46.891 27583.023 - 27793.581: 99.3615% ( 8) 00:36:46.891 27793.581 - 28004.138: 99.4216% ( 8) 00:36:46.891 28004.138 - 28214.696: 99.4817% ( 8) 00:36:46.891 28214.696 - 28425.253: 99.5192% ( 5) 00:36:46.891 33899.746 - 34110.304: 99.5418% ( 3) 00:36:46.891 34110.304 - 34320.861: 99.6019% ( 8) 00:36:46.891 34320.861 - 34531.418: 99.6544% ( 7) 00:36:46.891 34531.418 - 34741.976: 99.7145% ( 8) 00:36:46.891 34741.976 - 34952.533: 99.7671% ( 7) 00:36:46.891 34952.533 - 35163.091: 99.8272% ( 8) 00:36:46.891 35163.091 - 35373.648: 99.8798% ( 7) 00:36:46.891 35373.648 - 35584.206: 99.9399% ( 8) 
00:36:46.891 35584.206 - 35794.763: 99.9925% ( 7) 00:36:46.891 35794.763 - 36005.320: 100.0000% ( 1) 00:36:46.891 00:36:46.891 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:36:46.891 ============================================================================== 00:36:46.891 Range in us Cumulative IO count 00:36:46.891 7158.953 - 7211.592: 0.0150% ( 2) 00:36:46.891 7211.592 - 7264.231: 0.0673% ( 7) 00:36:46.891 7264.231 - 7316.871: 0.1346% ( 9) 00:36:46.891 7316.871 - 7369.510: 0.2318% ( 13) 00:36:46.891 7369.510 - 7422.149: 0.4486% ( 29) 00:36:46.891 7422.149 - 7474.789: 0.5906% ( 19) 00:36:46.891 7474.789 - 7527.428: 0.7850% ( 26) 00:36:46.891 7527.428 - 7580.067: 1.0840% ( 40) 00:36:46.891 7580.067 - 7632.707: 1.4578% ( 50) 00:36:46.891 7632.707 - 7685.346: 1.9886% ( 71) 00:36:46.891 7685.346 - 7737.986: 2.4148% ( 57) 00:36:46.891 7737.986 - 7790.625: 3.2147% ( 107) 00:36:46.891 7790.625 - 7843.264: 4.3436% ( 151) 00:36:46.891 7843.264 - 7895.904: 5.2407% ( 120) 00:36:46.891 7895.904 - 7948.543: 6.4369% ( 160) 00:36:46.891 7948.543 - 8001.182: 8.0667% ( 218) 00:36:46.891 8001.182 - 8053.822: 9.7039% ( 219) 00:36:46.891 8053.822 - 8106.461: 11.5356% ( 245) 00:36:46.891 8106.461 - 8159.100: 13.4794% ( 260) 00:36:46.891 8159.100 - 8211.740: 15.6175% ( 286) 00:36:46.892 8211.740 - 8264.379: 17.8903% ( 304) 00:36:46.892 8264.379 - 8317.018: 20.7461% ( 382) 00:36:46.892 8317.018 - 8369.658: 23.6169% ( 384) 00:36:46.892 8369.658 - 8422.297: 27.0559% ( 460) 00:36:46.892 8422.297 - 8474.937: 30.2557% ( 428) 00:36:46.892 8474.937 - 8527.576: 33.4779% ( 431) 00:36:46.892 8527.576 - 8580.215: 36.5505% ( 411) 00:36:46.892 8580.215 - 8632.855: 39.3242% ( 371) 00:36:46.892 8632.855 - 8685.494: 41.9782% ( 355) 00:36:46.892 8685.494 - 8738.133: 45.2303% ( 435) 00:36:46.892 8738.133 - 8790.773: 48.1908% ( 396) 00:36:46.892 8790.773 - 8843.412: 51.2410% ( 408) 00:36:46.892 8843.412 - 8896.051: 54.0969% ( 382) 00:36:46.892 8896.051 - 8948.691: 56.9079% ( 376) 00:36:46.892 8948.691 - 9001.330: 59.3974% ( 333) 00:36:46.892 9001.330 - 9053.969: 61.3861% ( 266) 00:36:46.892 9053.969 - 9106.609: 63.0906% ( 228) 00:36:46.892 9106.609 - 9159.248: 65.1166% ( 271) 00:36:46.892 9159.248 - 9211.888: 66.8361% ( 230) 00:36:46.892 9211.888 - 9264.527: 68.6154% ( 238) 00:36:46.892 9264.527 - 9317.166: 70.5293% ( 256) 00:36:46.892 9317.166 - 9369.806: 72.5329% ( 268) 00:36:46.892 9369.806 - 9422.445: 74.1103% ( 211) 00:36:46.892 9422.445 - 9475.084: 75.3215% ( 162) 00:36:46.892 9475.084 - 9527.724: 76.0691% ( 100) 00:36:46.892 9527.724 - 9580.363: 76.8092% ( 99) 00:36:46.892 9580.363 - 9633.002: 77.3550% ( 73) 00:36:46.892 9633.002 - 9685.642: 78.0801% ( 97) 00:36:46.892 9685.642 - 9738.281: 79.2838% ( 161) 00:36:46.892 9738.281 - 9790.920: 80.3454% ( 142) 00:36:46.892 9790.920 - 9843.560: 81.3995% ( 141) 00:36:46.892 9843.560 - 9896.199: 82.1546% ( 101) 00:36:46.892 9896.199 - 9948.839: 82.5807% ( 57) 00:36:46.892 9948.839 - 10001.478: 82.9695% ( 52) 00:36:46.892 10001.478 - 10054.117: 83.2760% ( 41) 00:36:46.892 10054.117 - 10106.757: 83.7022% ( 57) 00:36:46.892 10106.757 - 10159.396: 83.9937% ( 39) 00:36:46.892 10159.396 - 10212.035: 84.1731% ( 24) 00:36:46.892 10212.035 - 10264.675: 84.3675% ( 26) 00:36:46.892 10264.675 - 10317.314: 84.5245% ( 21) 00:36:46.892 10317.314 - 10369.953: 84.7488% ( 30) 00:36:46.892 10369.953 - 10422.593: 84.9357% ( 25) 00:36:46.892 10422.593 - 10475.232: 85.2572% ( 43) 00:36:46.892 10475.232 - 10527.871: 85.3544% ( 13) 00:36:46.892 10527.871 - 10580.511: 85.4516% ( 13) 
00:36:46.892 10580.511 - 10633.150: 85.5712% ( 16) 00:36:46.892 10633.150 - 10685.790: 85.6534% ( 11) 00:36:46.892 10685.790 - 10738.429: 85.7656% ( 15) 00:36:46.892 10738.429 - 10791.068: 86.0795% ( 42) 00:36:46.892 10791.068 - 10843.708: 86.2440% ( 22) 00:36:46.892 10843.708 - 10896.347: 86.3337% ( 12) 00:36:46.892 10896.347 - 10948.986: 86.5057% ( 23) 00:36:46.892 10948.986 - 11001.626: 86.8346% ( 44) 00:36:46.892 11001.626 - 11054.265: 87.0589% ( 30) 00:36:46.892 11054.265 - 11106.904: 87.3430% ( 38) 00:36:46.892 11106.904 - 11159.544: 87.6719% ( 44) 00:36:46.892 11159.544 - 11212.183: 87.8888% ( 29) 00:36:46.892 11212.183 - 11264.822: 88.1728% ( 38) 00:36:46.892 11264.822 - 11317.462: 88.4196% ( 33) 00:36:46.892 11317.462 - 11370.101: 88.6962% ( 37) 00:36:46.892 11370.101 - 11422.741: 89.0401% ( 46) 00:36:46.892 11422.741 - 11475.380: 89.4363% ( 53) 00:36:46.892 11475.380 - 11528.019: 89.8400% ( 54) 00:36:46.892 11528.019 - 11580.659: 90.0269% ( 25) 00:36:46.892 11580.659 - 11633.298: 90.1914% ( 22) 00:36:46.892 11633.298 - 11685.937: 90.3708% ( 24) 00:36:46.892 11685.937 - 11738.577: 90.4755% ( 14) 00:36:46.892 11738.577 - 11791.216: 90.5801% ( 14) 00:36:46.892 11791.216 - 11843.855: 90.6400% ( 8) 00:36:46.892 11843.855 - 11896.495: 90.7222% ( 11) 00:36:46.892 11896.495 - 11949.134: 90.8269% ( 14) 00:36:46.892 11949.134 - 12001.773: 91.0362% ( 28) 00:36:46.892 12001.773 - 12054.413: 91.2679% ( 31) 00:36:46.892 12054.413 - 12107.052: 91.4025% ( 18) 00:36:46.892 12107.052 - 12159.692: 91.5296% ( 17) 00:36:46.892 12159.692 - 12212.331: 91.6343% ( 14) 00:36:46.892 12212.331 - 12264.970: 91.7389% ( 14) 00:36:46.892 12264.970 - 12317.610: 91.8511% ( 15) 00:36:46.892 12317.610 - 12370.249: 91.9557% ( 14) 00:36:46.892 12370.249 - 12422.888: 92.0380% ( 11) 00:36:46.892 12422.888 - 12475.528: 92.1800% ( 19) 00:36:46.892 12475.528 - 12528.167: 92.3071% ( 17) 00:36:46.892 12528.167 - 12580.806: 92.4641% ( 21) 00:36:46.892 12580.806 - 12633.446: 92.6211% ( 21) 00:36:46.892 12633.446 - 12686.085: 92.7183% ( 13) 00:36:46.892 12686.085 - 12738.724: 92.8155% ( 13) 00:36:46.892 12738.724 - 12791.364: 92.9202% ( 14) 00:36:46.892 12791.364 - 12844.003: 93.0547% ( 18) 00:36:46.892 12844.003 - 12896.643: 93.2192% ( 22) 00:36:46.892 12896.643 - 12949.282: 93.4136% ( 26) 00:36:46.892 12949.282 - 13001.921: 93.9219% ( 68) 00:36:46.892 13001.921 - 13054.561: 94.1537% ( 31) 00:36:46.892 13054.561 - 13107.200: 94.2958% ( 19) 00:36:46.892 13107.200 - 13159.839: 94.4154% ( 16) 00:36:46.892 13159.839 - 13212.479: 94.5051% ( 12) 00:36:46.892 13212.479 - 13265.118: 94.5649% ( 8) 00:36:46.892 13265.118 - 13317.757: 94.6172% ( 7) 00:36:46.892 13317.757 - 13370.397: 94.6920% ( 10) 00:36:46.892 13370.397 - 13423.036: 94.7443% ( 7) 00:36:46.892 13423.036 - 13475.676: 94.8266% ( 11) 00:36:46.892 13475.676 - 13580.954: 94.9536% ( 17) 00:36:46.892 13580.954 - 13686.233: 95.1406% ( 25) 00:36:46.892 13686.233 - 13791.512: 95.3275% ( 25) 00:36:46.892 13791.512 - 13896.790: 95.3947% ( 9) 00:36:46.892 13896.790 - 14002.069: 95.4844% ( 12) 00:36:46.892 14002.069 - 14107.348: 95.6414% ( 21) 00:36:46.892 14107.348 - 14212.627: 95.8807% ( 32) 00:36:46.892 14212.627 - 14317.905: 96.0526% ( 23) 00:36:46.892 14317.905 - 14423.184: 96.0900% ( 5) 00:36:46.892 14423.184 - 14528.463: 96.1124% ( 3) 00:36:46.892 14528.463 - 14633.741: 96.1274% ( 2) 00:36:46.892 14633.741 - 14739.020: 96.1423% ( 2) 00:36:46.892 14739.020 - 14844.299: 96.1573% ( 2) 00:36:46.892 14844.299 - 14949.578: 96.1722% ( 2) 00:36:46.892 15160.135 - 15265.414: 96.1872% ( 
2) 00:36:46.892 15265.414 - 15370.692: 96.2620% ( 10) 00:36:46.892 15370.692 - 15475.971: 96.3442% ( 11) 00:36:46.892 15475.971 - 15581.250: 96.3666% ( 3) 00:36:46.892 15581.250 - 15686.529: 96.3891% ( 3) 00:36:46.892 15686.529 - 15791.807: 96.4862% ( 13) 00:36:46.892 15791.807 - 15897.086: 96.5685% ( 11) 00:36:46.892 15897.086 - 16002.365: 96.6956% ( 17) 00:36:46.892 16002.365 - 16107.643: 96.8675% ( 23) 00:36:46.892 16107.643 - 16212.922: 97.0694% ( 27) 00:36:46.892 16212.922 - 16318.201: 97.2937% ( 30) 00:36:46.892 16318.201 - 16423.480: 97.4133% ( 16) 00:36:46.892 16423.480 - 16528.758: 97.4731% ( 8) 00:36:46.892 16528.758 - 16634.037: 97.5179% ( 6) 00:36:46.892 16634.037 - 16739.316: 97.5628% ( 6) 00:36:46.892 16739.316 - 16844.594: 97.6002% ( 5) 00:36:46.892 16844.594 - 16949.873: 97.6077% ( 1) 00:36:46.892 17370.988 - 17476.267: 97.6151% ( 1) 00:36:46.892 17476.267 - 17581.545: 97.6749% ( 8) 00:36:46.892 17581.545 - 17686.824: 97.7422% ( 9) 00:36:46.892 17686.824 - 17792.103: 97.8618% ( 16) 00:36:46.892 17792.103 - 17897.382: 98.0114% ( 20) 00:36:46.892 17897.382 - 18002.660: 98.0562% ( 6) 00:36:46.892 18002.660 - 18107.939: 98.0861% ( 4) 00:36:46.892 18634.333 - 18739.611: 98.1011% ( 2) 00:36:46.892 18739.611 - 18844.890: 98.1235% ( 3) 00:36:46.892 18844.890 - 18950.169: 98.1534% ( 4) 00:36:46.892 18950.169 - 19055.447: 98.1833% ( 4) 00:36:46.892 19055.447 - 19160.726: 98.2207% ( 5) 00:36:46.892 19160.726 - 19266.005: 98.2506% ( 4) 00:36:46.892 19266.005 - 19371.284: 98.2805% ( 4) 00:36:46.892 19371.284 - 19476.562: 98.3029% ( 3) 00:36:46.892 19476.562 - 19581.841: 98.3328% ( 4) 00:36:46.892 19581.841 - 19687.120: 98.3627% ( 4) 00:36:46.892 19687.120 - 19792.398: 98.3926% ( 4) 00:36:46.892 19792.398 - 19897.677: 98.4300% ( 5) 00:36:46.892 19897.677 - 20002.956: 98.4599% ( 4) 00:36:46.892 20002.956 - 20108.235: 98.4898% ( 4) 00:36:46.892 20108.235 - 20213.513: 98.5197% ( 4) 00:36:46.892 20213.513 - 20318.792: 98.5496% ( 4) 00:36:46.892 20318.792 - 20424.071: 98.5646% ( 2) 00:36:46.892 23792.990 - 23898.268: 98.6020% ( 5) 00:36:46.892 23898.268 - 24003.547: 98.6543% ( 7) 00:36:46.892 24003.547 - 24108.826: 98.7216% ( 9) 00:36:46.892 24108.826 - 24214.104: 98.8412% ( 16) 00:36:46.892 24214.104 - 24319.383: 99.0206% ( 24) 00:36:46.892 24319.383 - 24424.662: 99.1403% ( 16) 00:36:46.892 24424.662 - 24529.941: 99.1926% ( 7) 00:36:46.892 24529.941 - 24635.219: 99.2374% ( 6) 00:36:46.892 24635.219 - 24740.498: 99.2823% ( 6) 00:36:46.892 24740.498 - 24845.777: 99.3272% ( 6) 00:36:46.892 24845.777 - 24951.055: 99.3795% ( 7) 00:36:46.892 24951.055 - 25056.334: 99.4243% ( 6) 00:36:46.892 25056.334 - 25161.613: 99.4692% ( 6) 00:36:46.892 25161.613 - 25266.892: 99.5141% ( 6) 00:36:46.892 25266.892 - 25372.170: 99.5215% ( 1) 00:36:46.892 26214.400 - 26319.679: 99.5290% ( 1) 00:36:46.892 26319.679 - 26424.957: 99.5514% ( 3) 00:36:46.892 26424.957 - 26530.236: 99.5813% ( 4) 00:36:46.892 26530.236 - 26635.515: 99.6112% ( 4) 00:36:46.892 26635.515 - 26740.794: 99.6337% ( 3) 00:36:46.892 26740.794 - 26846.072: 99.6711% ( 5) 00:36:46.892 26846.072 - 26951.351: 99.7010% ( 4) 00:36:46.892 26951.351 - 27161.908: 99.7533% ( 7) 00:36:46.892 27161.908 - 27372.466: 99.8206% ( 9) 00:36:46.892 27372.466 - 27583.023: 99.8729% ( 7) 00:36:46.892 27583.023 - 27793.581: 99.9327% ( 8) 00:36:46.892 27793.581 - 28004.138: 99.9925% ( 8) 00:36:46.892 28004.138 - 28214.696: 100.0000% ( 1) 00:36:46.892 00:36:46.892 17:33:47 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:36:46.892 00:36:46.892 real 0m2.699s 
00:36:46.892 user 0m2.292s 00:36:46.892 sys 0m0.306s 00:36:46.892 17:33:47 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:46.892 17:33:47 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:36:46.893 ************************************ 00:36:46.893 END TEST nvme_perf 00:36:46.893 ************************************ 00:36:46.893 17:33:47 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:36:46.893 17:33:47 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:46.893 17:33:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:46.893 17:33:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:36:46.893 ************************************ 00:36:46.893 START TEST nvme_hello_world 00:36:46.893 ************************************ 00:36:46.893 17:33:47 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:36:47.152 Initializing NVMe Controllers 00:36:47.152 Attached to 0000:00:10.0 00:36:47.152 Namespace ID: 1 size: 6GB 00:36:47.152 Attached to 0000:00:11.0 00:36:47.152 Namespace ID: 1 size: 5GB 00:36:47.152 Attached to 0000:00:13.0 00:36:47.152 Namespace ID: 1 size: 1GB 00:36:47.152 Attached to 0000:00:12.0 00:36:47.152 Namespace ID: 1 size: 4GB 00:36:47.152 Namespace ID: 2 size: 4GB 00:36:47.152 Namespace ID: 3 size: 4GB 00:36:47.152 Initialization complete. 00:36:47.152 INFO: using host memory buffer for IO 00:36:47.152 Hello world! 00:36:47.152 INFO: using host memory buffer for IO 00:36:47.152 Hello world! 00:36:47.152 INFO: using host memory buffer for IO 00:36:47.152 Hello world! 00:36:47.152 INFO: using host memory buffer for IO 00:36:47.152 Hello world! 00:36:47.152 INFO: using host memory buffer for IO 00:36:47.152 Hello world! 00:36:47.152 INFO: using host memory buffer for IO 00:36:47.152 Hello world! 
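Annotation: the hello_world pass above appears to issue one write/read pair per namespace it touches, printing "Hello world!" for each. A minimal sketch for reproducing it outside the run_test wrapper, assuming the same spdk_repo checkout and that the controllers are already bound to a userspace driver via scripts/setup.sh:

    cd /home/vagrant/spdk_repo/spdk
    sudo scripts/setup.sh                 # assumed prerequisite: rebind NVMe devices away from the kernel driver
    sudo build/examples/hello_world -i 0  # -i 0 reuses the shared-memory id seen throughout this job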
00:36:47.152 00:36:47.152 real 0m0.319s 00:36:47.152 user 0m0.108s 00:36:47.152 sys 0m0.164s 00:36:47.152 17:33:47 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:47.152 17:33:47 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:36:47.152 ************************************ 00:36:47.152 END TEST nvme_hello_world 00:36:47.152 ************************************ 00:36:47.152 17:33:47 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:36:47.152 17:33:47 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:47.152 17:33:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:47.152 17:33:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:36:47.152 ************************************ 00:36:47.152 START TEST nvme_sgl 00:36:47.152 ************************************ 00:36:47.152 17:33:47 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:36:47.412 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:36:47.412 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:36:47.412 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:36:47.412 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:36:47.412 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:36:47.412 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:36:47.412 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:36:47.412 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:36:47.412 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:36:47.672 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:36:47.672 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:36:47.672 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:36:47.672 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:36:47.672 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:36:47.672 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:36:47.672 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:36:47.672 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:36:47.672 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:36:47.672 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:36:47.672 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:36:47.672 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:36:47.672 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:36:47.672 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:36:47.672 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:36:47.672 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:36:47.672 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:36:47.672 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:36:47.672 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:36:47.672 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:36:47.672 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:36:47.672 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:36:47.672 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:36:47.672 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:36:47.672 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:36:47.672 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:36:47.672 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:36:47.672 NVMe Readv/Writev Request test 00:36:47.672 Attached to 0000:00:10.0 00:36:47.672 Attached to 0000:00:11.0 00:36:47.672 Attached to 0000:00:13.0 00:36:47.672 Attached to 0000:00:12.0 00:36:47.672 0000:00:10.0: build_io_request_2 test passed 00:36:47.672 0000:00:10.0: build_io_request_4 test passed 00:36:47.672 0000:00:10.0: build_io_request_5 test passed 00:36:47.672 0000:00:10.0: build_io_request_6 test passed 00:36:47.672 0000:00:10.0: build_io_request_7 test passed 00:36:47.672 0000:00:10.0: build_io_request_10 test passed 00:36:47.672 0000:00:11.0: build_io_request_2 test passed 00:36:47.672 0000:00:11.0: build_io_request_4 test passed 00:36:47.672 0000:00:11.0: build_io_request_5 test passed 00:36:47.672 0000:00:11.0: build_io_request_6 test passed 00:36:47.672 0000:00:11.0: build_io_request_7 test passed 00:36:47.672 0000:00:11.0: build_io_request_10 test passed 00:36:47.672 Cleaning up... 00:36:47.672 00:36:47.672 real 0m0.385s 00:36:47.672 user 0m0.174s 00:36:47.672 sys 0m0.163s 00:36:47.672 17:33:48 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:47.672 17:33:48 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:36:47.672 ************************************ 00:36:47.672 END TEST nvme_sgl 00:36:47.672 ************************************ 00:36:47.672 17:33:48 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:36:47.672 17:33:48 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:47.672 17:33:48 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:47.672 17:33:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:36:47.672 ************************************ 00:36:47.672 START TEST nvme_e2edp 00:36:47.672 ************************************ 00:36:47.672 17:33:48 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:36:47.931 NVMe Write/Read with End-to-End data protection test 00:36:47.931 Attached to 0000:00:10.0 00:36:47.931 Attached to 0000:00:11.0 00:36:47.931 Attached to 0000:00:13.0 00:36:47.931 Attached to 0000:00:12.0 00:36:47.931 Cleaning up... 
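Annotation: the sgl results above mix expected-negative cases ("Invalid IO length parameter") with positive ones ("test passed"). A quick sketch for tallying both kinds from a saved copy of this console output (the log filename is an assumption):

    grep -Eo 'build_io_request_[0-9]+ (test passed|Invalid IO length parameter)' console.log | sort | uniq -c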
00:36:47.931 00:36:47.931 real 0m0.310s 00:36:47.931 user 0m0.101s 00:36:47.931 sys 0m0.157s 00:36:47.931 17:33:48 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:47.931 ************************************ 00:36:47.931 END TEST nvme_e2edp 00:36:47.931 ************************************ 00:36:47.931 17:33:48 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:36:47.931 17:33:48 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:36:47.931 17:33:48 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:47.931 17:33:48 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:47.931 17:33:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:36:48.190 ************************************ 00:36:48.190 START TEST nvme_reserve 00:36:48.190 ************************************ 00:36:48.190 17:33:48 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:36:48.450 ===================================================== 00:36:48.450 NVMe Controller at PCI bus 0, device 16, function 0 00:36:48.450 ===================================================== 00:36:48.450 Reservations: Not Supported 00:36:48.450 ===================================================== 00:36:48.450 NVMe Controller at PCI bus 0, device 17, function 0 00:36:48.450 ===================================================== 00:36:48.450 Reservations: Not Supported 00:36:48.450 ===================================================== 00:36:48.450 NVMe Controller at PCI bus 0, device 19, function 0 00:36:48.450 ===================================================== 00:36:48.450 Reservations: Not Supported 00:36:48.450 ===================================================== 00:36:48.450 NVMe Controller at PCI bus 0, device 18, function 0 00:36:48.450 ===================================================== 00:36:48.450 Reservations: Not Supported 00:36:48.450 Reservation test passed 00:36:48.450 00:36:48.450 real 0m0.339s 00:36:48.450 user 0m0.105s 00:36:48.450 sys 0m0.184s 00:36:48.450 17:33:48 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:48.450 17:33:48 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:36:48.450 ************************************ 00:36:48.450 END TEST nvme_reserve 00:36:48.450 ************************************ 00:36:48.450 17:33:49 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:36:48.450 17:33:49 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:48.450 17:33:49 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:48.450 17:33:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:36:48.450 ************************************ 00:36:48.450 START TEST nvme_err_injection 00:36:48.450 ************************************ 00:36:48.450 17:33:49 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:36:48.709 NVMe Error Injection test 00:36:48.709 Attached to 0000:00:10.0 00:36:48.709 Attached to 0000:00:11.0 00:36:48.709 Attached to 0000:00:13.0 00:36:48.709 Attached to 0000:00:12.0 00:36:48.709 0000:00:10.0: get features failed as expected 00:36:48.709 0000:00:11.0: get features failed as expected 00:36:48.709 0000:00:13.0: get features failed as expected 00:36:48.709 0000:00:12.0: get features failed as expected 00:36:48.709 
0000:00:10.0: get features successfully as expected 00:36:48.709 0000:00:11.0: get features successfully as expected 00:36:48.709 0000:00:13.0: get features successfully as expected 00:36:48.709 0000:00:12.0: get features successfully as expected 00:36:48.709 0000:00:11.0: read failed as expected 00:36:48.709 0000:00:10.0: read failed as expected 00:36:48.709 0000:00:13.0: read failed as expected 00:36:48.709 0000:00:12.0: read failed as expected 00:36:48.709 0000:00:11.0: read successfully as expected 00:36:48.709 0000:00:10.0: read successfully as expected 00:36:48.709 0000:00:13.0: read successfully as expected 00:36:48.709 0000:00:12.0: read successfully as expected 00:36:48.709 Cleaning up... 00:36:48.709 00:36:48.709 real 0m0.320s 00:36:48.709 user 0m0.108s 00:36:48.709 sys 0m0.166s 00:36:48.709 17:33:49 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:48.709 17:33:49 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:36:48.709 ************************************ 00:36:48.709 END TEST nvme_err_injection 00:36:48.709 ************************************ 00:36:48.967 17:33:49 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:36:48.967 17:33:49 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:36:48.967 17:33:49 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:48.968 17:33:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:36:48.968 ************************************ 00:36:48.968 START TEST nvme_overhead 00:36:48.968 ************************************ 00:36:48.968 17:33:49 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:36:50.347 Initializing NVMe Controllers 00:36:50.347 Attached to 0000:00:10.0 00:36:50.347 Attached to 0000:00:11.0 00:36:50.347 Attached to 0000:00:13.0 00:36:50.347 Attached to 0000:00:12.0 00:36:50.347 Initialization complete. Launching workers. 
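Annotation: the overhead tool times the software submit and completion paths per IO; the flags match the run_test line above (-o 4096 for 4 KiB IOs, -t 1 for a one-second run, -H for the histograms printed below, -i 0 for the shared-memory id; these flag readings are inferred from the output, not from the tool's help text). Standalone rerun sketch:

    sudo test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0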
00:36:50.347 submit (in ns) avg, min, max = 14020.0, 12540.6, 104445.0 00:36:50.347 complete (in ns) avg, min, max = 9180.4, 8349.4, 108869.9 00:36:50.347 00:36:50.347 Submit histogram 00:36:50.347 ================ 00:36:50.347 Range in us Cumulative Count 00:36:50.347 12.492 - 12.543: 0.0174% ( 1) 00:36:50.347 12.594 - 12.646: 0.0522% ( 2) 00:36:50.347 12.697 - 12.749: 0.0871% ( 2) 00:36:50.347 12.749 - 12.800: 0.2438% ( 9) 00:36:50.347 12.800 - 12.851: 0.7313% ( 28) 00:36:50.347 12.851 - 12.903: 1.1841% ( 26) 00:36:50.347 12.903 - 12.954: 1.9676% ( 45) 00:36:50.347 12.954 - 13.006: 3.1691% ( 69) 00:36:50.347 13.006 - 13.057: 4.6143% ( 83) 00:36:50.347 13.057 - 13.108: 6.7038% ( 120) 00:36:50.347 13.108 - 13.160: 10.1863% ( 200) 00:36:50.347 13.160 - 13.263: 18.6662% ( 487) 00:36:50.347 13.263 - 13.365: 31.0813% ( 713) 00:36:50.347 13.365 - 13.468: 42.0512% ( 630) 00:36:50.347 13.468 - 13.571: 52.5161% ( 601) 00:36:50.347 13.571 - 13.674: 60.8741% ( 480) 00:36:50.347 13.674 - 13.777: 68.9361% ( 463) 00:36:50.347 13.777 - 13.880: 77.2593% ( 478) 00:36:50.347 13.880 - 13.982: 84.2069% ( 399) 00:36:50.347 13.982 - 14.085: 88.0724% ( 222) 00:36:50.347 14.085 - 14.188: 90.4579% ( 137) 00:36:50.347 14.188 - 14.291: 91.7116% ( 72) 00:36:50.347 14.291 - 14.394: 92.4081% ( 40) 00:36:50.347 14.394 - 14.496: 93.0350% ( 36) 00:36:50.347 14.496 - 14.599: 93.4703% ( 25) 00:36:50.347 14.599 - 14.702: 93.7489% ( 16) 00:36:50.347 14.702 - 14.805: 93.8708% ( 7) 00:36:50.347 14.805 - 14.908: 93.9579% ( 5) 00:36:50.347 14.908 - 15.010: 94.0623% ( 6) 00:36:50.347 15.010 - 15.113: 94.0972% ( 2) 00:36:50.347 15.113 - 15.216: 94.1146% ( 1) 00:36:50.347 15.216 - 15.319: 94.1668% ( 3) 00:36:50.347 15.422 - 15.524: 94.2016% ( 2) 00:36:50.347 15.833 - 15.936: 94.2365% ( 2) 00:36:50.347 16.039 - 16.141: 94.2539% ( 1) 00:36:50.347 16.244 - 16.347: 94.2713% ( 1) 00:36:50.347 16.347 - 16.450: 94.2887% ( 1) 00:36:50.347 16.450 - 16.553: 94.3061% ( 1) 00:36:50.347 16.655 - 16.758: 94.3583% ( 3) 00:36:50.347 16.758 - 16.861: 94.4106% ( 3) 00:36:50.347 16.964 - 17.067: 94.4280% ( 1) 00:36:50.347 17.169 - 17.272: 94.4628% ( 2) 00:36:50.347 17.272 - 17.375: 94.4976% ( 2) 00:36:50.347 17.375 - 17.478: 94.5499% ( 3) 00:36:50.347 17.478 - 17.581: 94.5673% ( 1) 00:36:50.347 17.581 - 17.684: 94.7066% ( 8) 00:36:50.347 17.684 - 17.786: 94.8807% ( 10) 00:36:50.347 17.786 - 17.889: 94.9852% ( 6) 00:36:50.347 17.889 - 17.992: 95.0548% ( 4) 00:36:50.347 17.992 - 18.095: 95.2464% ( 11) 00:36:50.347 18.095 - 18.198: 95.4205% ( 10) 00:36:50.347 18.198 - 18.300: 95.5424% ( 7) 00:36:50.347 18.300 - 18.403: 95.7513% ( 12) 00:36:50.347 18.403 - 18.506: 95.9429% ( 11) 00:36:50.347 18.506 - 18.609: 96.1170% ( 10) 00:36:50.347 18.609 - 18.712: 96.2041% ( 5) 00:36:50.347 18.712 - 18.814: 96.3608% ( 9) 00:36:50.347 18.814 - 18.917: 96.5001% ( 8) 00:36:50.347 18.917 - 19.020: 96.7439% ( 14) 00:36:50.347 19.020 - 19.123: 96.9006% ( 9) 00:36:50.347 19.123 - 19.226: 97.0921% ( 11) 00:36:50.347 19.226 - 19.329: 97.1792% ( 5) 00:36:50.347 19.329 - 19.431: 97.3359% ( 9) 00:36:50.347 19.431 - 19.534: 97.5448% ( 12) 00:36:50.347 19.534 - 19.637: 97.6667% ( 7) 00:36:50.348 19.637 - 19.740: 97.8583% ( 11) 00:36:50.348 19.740 - 19.843: 97.9279% ( 4) 00:36:50.348 19.843 - 19.945: 97.9976% ( 4) 00:36:50.348 19.945 - 20.048: 98.0150% ( 1) 00:36:50.348 20.048 - 20.151: 98.0846% ( 4) 00:36:50.348 20.151 - 20.254: 98.1194% ( 2) 00:36:50.348 20.254 - 20.357: 98.2413% ( 7) 00:36:50.348 20.357 - 20.459: 98.2587% ( 1) 00:36:50.348 20.459 - 20.562: 98.2936% ( 2) 
00:36:50.348 20.562 - 20.665: 98.3458% ( 3) 00:36:50.348 20.665 - 20.768: 98.4329% ( 5) 00:36:50.348 20.768 - 20.871: 98.4851% ( 3) 00:36:50.348 20.871 - 20.973: 98.5199% ( 2) 00:36:50.348 21.076 - 21.179: 98.5548% ( 2) 00:36:50.348 21.179 - 21.282: 98.5896% ( 2) 00:36:50.348 21.282 - 21.385: 98.6592% ( 4) 00:36:50.348 21.385 - 21.488: 98.6766% ( 1) 00:36:50.348 21.488 - 21.590: 98.7115% ( 2) 00:36:50.348 21.590 - 21.693: 98.7463% ( 2) 00:36:50.348 21.693 - 21.796: 98.7811% ( 2) 00:36:50.348 21.796 - 21.899: 98.8159% ( 2) 00:36:50.348 21.899 - 22.002: 98.8508% ( 2) 00:36:50.348 22.002 - 22.104: 98.8856% ( 2) 00:36:50.348 22.104 - 22.207: 98.9204% ( 2) 00:36:50.348 22.207 - 22.310: 98.9378% ( 1) 00:36:50.348 22.310 - 22.413: 98.9552% ( 1) 00:36:50.348 22.516 - 22.618: 98.9727% ( 1) 00:36:50.348 22.721 - 22.824: 99.0075% ( 2) 00:36:50.348 22.824 - 22.927: 99.0249% ( 1) 00:36:50.348 22.927 - 23.030: 99.0423% ( 1) 00:36:50.348 23.338 - 23.441: 99.0597% ( 1) 00:36:50.348 23.647 - 23.749: 99.0771% ( 1) 00:36:50.348 23.749 - 23.852: 99.0945% ( 1) 00:36:50.348 23.852 - 23.955: 99.1120% ( 1) 00:36:50.348 23.955 - 24.058: 99.1468% ( 2) 00:36:50.348 24.058 - 24.161: 99.1816% ( 2) 00:36:50.348 24.161 - 24.263: 99.1990% ( 1) 00:36:50.348 24.263 - 24.366: 99.2338% ( 2) 00:36:50.348 24.366 - 24.469: 99.2513% ( 1) 00:36:50.348 24.469 - 24.572: 99.2687% ( 1) 00:36:50.348 24.572 - 24.675: 99.2861% ( 1) 00:36:50.348 24.778 - 24.880: 99.3035% ( 1) 00:36:50.348 24.983 - 25.086: 99.3383% ( 2) 00:36:50.348 25.086 - 25.189: 99.4080% ( 4) 00:36:50.348 25.497 - 25.600: 99.4428% ( 2) 00:36:50.348 25.806 - 25.908: 99.4602% ( 1) 00:36:50.348 26.217 - 26.320: 99.4776% ( 1) 00:36:50.348 26.320 - 26.525: 99.4950% ( 1) 00:36:50.348 26.731 - 26.937: 99.5124% ( 1) 00:36:50.348 27.965 - 28.170: 99.5299% ( 1) 00:36:50.348 28.582 - 28.787: 99.5473% ( 1) 00:36:50.348 28.787 - 28.993: 99.5821% ( 2) 00:36:50.348 29.404 - 29.610: 99.5995% ( 1) 00:36:50.348 29.610 - 29.815: 99.6343% ( 2) 00:36:50.348 29.815 - 30.021: 99.6517% ( 1) 00:36:50.348 30.227 - 30.432: 99.6692% ( 1) 00:36:50.348 30.432 - 30.638: 99.6866% ( 1) 00:36:50.348 30.638 - 30.843: 99.7388% ( 3) 00:36:50.348 31.049 - 31.255: 99.7736% ( 2) 00:36:50.348 31.460 - 31.666: 99.7910% ( 1) 00:36:50.348 31.871 - 32.077: 99.8085% ( 1) 00:36:50.348 32.488 - 32.694: 99.8259% ( 1) 00:36:50.348 32.900 - 33.105: 99.8433% ( 1) 00:36:50.348 33.311 - 33.516: 99.8607% ( 1) 00:36:50.348 51.611 - 51.817: 99.8781% ( 1) 00:36:50.348 55.929 - 56.341: 99.8955% ( 1) 00:36:50.348 57.163 - 57.574: 99.9129% ( 1) 00:36:50.348 59.631 - 60.042: 99.9303% ( 1) 00:36:50.348 69.089 - 69.500: 99.9478% ( 1) 00:36:50.348 73.613 - 74.024: 99.9652% ( 1) 00:36:50.348 98.699 - 99.110: 99.9826% ( 1) 00:36:50.348 104.045 - 104.456: 100.0000% ( 1) 00:36:50.348 00:36:50.348 Complete histogram 00:36:50.348 ================== 00:36:50.348 Range in us Cumulative Count 00:36:50.348 8.328 - 8.379: 0.0174% ( 1) 00:36:50.348 8.379 - 8.431: 0.0871% ( 4) 00:36:50.348 8.431 - 8.482: 0.2438% ( 9) 00:36:50.348 8.482 - 8.533: 0.6791% ( 25) 00:36:50.348 8.533 - 8.585: 2.2288% ( 89) 00:36:50.348 8.585 - 8.636: 5.9203% ( 212) 00:36:50.348 8.636 - 8.688: 11.3530% ( 312) 00:36:50.348 8.688 - 8.739: 16.5245% ( 297) 00:36:50.348 8.739 - 8.790: 25.7183% ( 528) 00:36:50.348 8.790 - 8.842: 38.6035% ( 740) 00:36:50.348 8.842 - 8.893: 50.4788% ( 682) 00:36:50.348 8.893 - 8.945: 59.6726% ( 528) 00:36:50.348 8.945 - 8.996: 66.1153% ( 370) 00:36:50.348 8.996 - 9.047: 72.3141% ( 356) 00:36:50.348 9.047 - 9.099: 77.7642% ( 313) 00:36:50.348 
9.099 - 9.150: 82.2218% ( 256) 00:36:50.348 9.150 - 9.202: 86.1048% ( 223) 00:36:50.348 9.202 - 9.253: 88.8386% ( 157) 00:36:50.348 9.253 - 9.304: 90.4928% ( 95) 00:36:50.348 9.304 - 9.356: 91.7813% ( 74) 00:36:50.348 9.356 - 9.407: 92.9479% ( 67) 00:36:50.348 9.407 - 9.459: 93.8534% ( 52) 00:36:50.348 9.459 - 9.510: 94.5325% ( 39) 00:36:50.348 9.510 - 9.561: 95.0374% ( 29) 00:36:50.348 9.561 - 9.613: 95.3160% ( 16) 00:36:50.348 9.613 - 9.664: 95.7339% ( 24) 00:36:50.348 9.664 - 9.716: 95.9777% ( 14) 00:36:50.348 9.716 - 9.767: 96.1692% ( 11) 00:36:50.348 9.767 - 9.818: 96.3782% ( 12) 00:36:50.348 9.818 - 9.870: 96.5871% ( 12) 00:36:50.348 9.870 - 9.921: 96.7090% ( 7) 00:36:50.348 9.921 - 9.973: 96.7613% ( 3) 00:36:50.348 9.973 - 10.024: 96.8657% ( 6) 00:36:50.348 10.024 - 10.076: 96.9528% ( 5) 00:36:50.348 10.076 - 10.127: 97.0399% ( 5) 00:36:50.348 10.127 - 10.178: 97.0747% ( 2) 00:36:50.348 10.178 - 10.230: 97.0921% ( 1) 00:36:50.348 10.230 - 10.281: 97.1269% ( 2) 00:36:50.348 10.333 - 10.384: 97.1443% ( 1) 00:36:50.348 10.435 - 10.487: 97.1618% ( 1) 00:36:50.348 10.487 - 10.538: 97.1792% ( 1) 00:36:50.348 10.590 - 10.641: 97.1966% ( 1) 00:36:50.348 10.692 - 10.744: 97.2140% ( 1) 00:36:50.348 10.744 - 10.795: 97.2314% ( 1) 00:36:50.348 10.795 - 10.847: 97.2488% ( 1) 00:36:50.348 10.898 - 10.949: 97.2662% ( 1) 00:36:50.348 10.949 - 11.001: 97.2836% ( 1) 00:36:50.348 11.155 - 11.206: 97.3011% ( 1) 00:36:50.348 11.258 - 11.309: 97.3359% ( 2) 00:36:50.348 11.309 - 11.361: 97.3533% ( 1) 00:36:50.348 11.412 - 11.463: 97.3707% ( 1) 00:36:50.348 11.515 - 11.566: 97.4055% ( 2) 00:36:50.348 11.566 - 11.618: 97.4229% ( 1) 00:36:50.348 11.618 - 11.669: 97.4404% ( 1) 00:36:50.348 11.669 - 11.720: 97.4578% ( 1) 00:36:50.348 11.720 - 11.772: 97.4926% ( 2) 00:36:50.348 11.823 - 11.875: 97.5100% ( 1) 00:36:50.348 11.978 - 12.029: 97.5274% ( 1) 00:36:50.348 12.029 - 12.080: 97.5448% ( 1) 00:36:50.348 12.235 - 12.286: 97.5622% ( 1) 00:36:50.348 12.594 - 12.646: 97.5797% ( 1) 00:36:50.348 12.646 - 12.697: 97.5971% ( 1) 00:36:50.348 13.263 - 13.365: 97.6145% ( 1) 00:36:50.348 13.365 - 13.468: 97.6319% ( 1) 00:36:50.348 13.571 - 13.674: 97.6493% ( 1) 00:36:50.348 14.085 - 14.188: 97.6667% ( 1) 00:36:50.349 14.394 - 14.496: 97.7538% ( 5) 00:36:50.349 14.496 - 14.599: 97.7886% ( 2) 00:36:50.349 14.599 - 14.702: 97.8234% ( 2) 00:36:50.349 14.702 - 14.805: 97.8583% ( 2) 00:36:50.349 14.805 - 14.908: 97.9453% ( 5) 00:36:50.349 14.908 - 15.010: 98.0498% ( 6) 00:36:50.349 15.010 - 15.113: 98.1194% ( 4) 00:36:50.349 15.113 - 15.216: 98.2762% ( 9) 00:36:50.349 15.216 - 15.319: 98.4329% ( 9) 00:36:50.349 15.319 - 15.422: 98.5722% ( 8) 00:36:50.349 15.422 - 15.524: 98.6941% ( 7) 00:36:50.349 15.524 - 15.627: 98.7115% ( 1) 00:36:50.349 15.627 - 15.730: 98.8508% ( 8) 00:36:50.349 15.730 - 15.833: 98.9552% ( 6) 00:36:50.349 15.833 - 15.936: 99.0249% ( 4) 00:36:50.349 15.936 - 16.039: 99.1120% ( 5) 00:36:50.349 16.039 - 16.141: 99.1816% ( 4) 00:36:50.349 16.141 - 16.244: 99.2164% ( 2) 00:36:50.349 16.244 - 16.347: 99.2338% ( 1) 00:36:50.349 16.450 - 16.553: 99.2513% ( 1) 00:36:50.349 16.553 - 16.655: 99.2687% ( 1) 00:36:50.349 16.758 - 16.861: 99.3209% ( 3) 00:36:50.349 18.198 - 18.300: 99.3383% ( 1) 00:36:50.349 19.123 - 19.226: 99.3557% ( 1) 00:36:50.349 19.226 - 19.329: 99.3731% ( 1) 00:36:50.349 19.329 - 19.431: 99.4254% ( 3) 00:36:50.349 19.431 - 19.534: 99.4776% ( 3) 00:36:50.349 19.534 - 19.637: 99.5124% ( 2) 00:36:50.349 19.637 - 19.740: 99.5473% ( 2) 00:36:50.349 19.740 - 19.843: 99.5821% ( 2) 00:36:50.349 19.843 
- 19.945: 99.5995% ( 1) 00:36:50.349 19.945 - 20.048: 99.6169% ( 1) 00:36:50.349 20.048 - 20.151: 99.6343% ( 1) 00:36:50.349 21.179 - 21.282: 99.6517% ( 1) 00:36:50.349 21.385 - 21.488: 99.6692% ( 1) 00:36:50.349 23.544 - 23.647: 99.6866% ( 1) 00:36:50.349 23.852 - 23.955: 99.7040% ( 1) 00:36:50.349 23.955 - 24.058: 99.7214% ( 1) 00:36:50.349 24.366 - 24.469: 99.7562% ( 2) 00:36:50.349 24.469 - 24.572: 99.7736% ( 1) 00:36:50.349 24.778 - 24.880: 99.7910% ( 1) 00:36:50.349 24.880 - 24.983: 99.8085% ( 1) 00:36:50.349 25.292 - 25.394: 99.8259% ( 1) 00:36:50.349 25.600 - 25.703: 99.8433% ( 1) 00:36:50.349 27.142 - 27.348: 99.8607% ( 1) 00:36:50.349 28.376 - 28.582: 99.8781% ( 1) 00:36:50.349 30.843 - 31.049: 99.8955% ( 1) 00:36:50.349 34.545 - 34.750: 99.9129% ( 1) 00:36:50.349 38.657 - 38.863: 99.9303% ( 1) 00:36:50.349 45.237 - 45.443: 99.9478% ( 1) 00:36:50.349 55.518 - 55.929: 99.9652% ( 1) 00:36:50.349 57.986 - 58.397: 99.9826% ( 1) 00:36:50.349 108.569 - 109.391: 100.0000% ( 1) 00:36:50.349 00:36:50.349 00:36:50.349 real 0m1.309s 00:36:50.349 user 0m1.097s 00:36:50.349 sys 0m0.161s 00:36:50.349 17:33:50 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:50.349 17:33:50 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:36:50.349 ************************************ 00:36:50.349 END TEST nvme_overhead 00:36:50.349 ************************************ 00:36:50.349 17:33:50 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:36:50.349 17:33:50 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:36:50.349 17:33:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:50.349 17:33:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:36:50.349 ************************************ 00:36:50.349 START TEST nvme_arbitration 00:36:50.349 ************************************ 00:36:50.349 17:33:50 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:36:53.642 Initializing NVMe Controllers 00:36:53.642 Attached to 0000:00:10.0 00:36:53.642 Attached to 0000:00:11.0 00:36:53.642 Attached to 0000:00:13.0 00:36:53.642 Attached to 0000:00:12.0 00:36:53.642 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:36:53.642 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:36:53.642 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:36:53.642 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:36:53.642 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:36:53.642 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:36:53.642 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:36:53.642 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:36:53.642 Initialization complete. Launching workers. 
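Annotation: the "run with configuration" line above expands the effective arbitration workload (-q 64 queue depth, -w randrw with -M 50 for a 50% read mix, -c 0xf spreading threads across cores 0-3) and is directly reusable as a standalone command:

    sudo build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0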
00:36:53.642 Starting thread on core 1 with urgent priority queue 00:36:53.642 Starting thread on core 2 with urgent priority queue 00:36:53.643 Starting thread on core 3 with urgent priority queue 00:36:53.643 Starting thread on core 0 with urgent priority queue 00:36:53.643 QEMU NVMe Ctrl (12340 ) core 0: 768.00 IO/s 130.21 secs/100000 ios 00:36:53.643 QEMU NVMe Ctrl (12342 ) core 0: 768.00 IO/s 130.21 secs/100000 ios 00:36:53.643 QEMU NVMe Ctrl (12341 ) core 1: 597.33 IO/s 167.41 secs/100000 ios 00:36:53.643 QEMU NVMe Ctrl (12342 ) core 1: 597.33 IO/s 167.41 secs/100000 ios 00:36:53.643 QEMU NVMe Ctrl (12343 ) core 2: 426.67 IO/s 234.38 secs/100000 ios 00:36:53.643 QEMU NVMe Ctrl (12342 ) core 3: 384.00 IO/s 260.42 secs/100000 ios 00:36:53.643 ======================================================== 00:36:53.643 00:36:53.643 00:36:53.643 real 0m3.475s 00:36:53.643 user 0m9.409s 00:36:53.643 sys 0m0.208s 00:36:53.643 17:33:54 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:53.643 17:33:54 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:36:53.643 ************************************ 00:36:53.643 END TEST nvme_arbitration 00:36:53.643 ************************************ 00:36:53.903 17:33:54 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:36:53.903 17:33:54 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:36:53.903 17:33:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:53.903 17:33:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:36:53.903 ************************************ 00:36:53.903 START TEST nvme_single_aen 00:36:53.903 ************************************ 00:36:53.903 17:33:54 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:36:54.163 Asynchronous Event Request test 00:36:54.163 Attached to 0000:00:10.0 00:36:54.163 Attached to 0000:00:11.0 00:36:54.163 Attached to 0000:00:13.0 00:36:54.163 Attached to 0000:00:12.0 00:36:54.163 Reset controller to setup AER completions for this process 00:36:54.163 Registering asynchronous event callbacks... 
00:36:54.163 Getting orig temperature thresholds of all controllers 00:36:54.163 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:36:54.163 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:36:54.163 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:36:54.163 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:36:54.163 Setting all controllers temperature threshold low to trigger AER 00:36:54.163 Waiting for all controllers temperature threshold to be set lower 00:36:54.163 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:36:54.163 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:36:54.163 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:36:54.163 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:36:54.163 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:36:54.163 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:36:54.163 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:36:54.163 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:36:54.163 Waiting for all controllers to trigger AER and reset threshold 00:36:54.163 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:36:54.163 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:36:54.163 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:36:54.163 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:36:54.163 Cleaning up... 00:36:54.163 00:36:54.163 real 0m0.311s 00:36:54.163 user 0m0.100s 00:36:54.163 sys 0m0.167s 00:36:54.163 17:33:54 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:54.163 17:33:54 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:36:54.163 ************************************ 00:36:54.163 END TEST nvme_single_aen 00:36:54.163 ************************************ 00:36:54.163 17:33:54 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:36:54.163 17:33:54 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:54.163 17:33:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:54.163 17:33:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:36:54.163 ************************************ 00:36:54.163 START TEST nvme_doorbell_aers 00:36:54.163 ************************************ 00:36:54.163 17:33:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:36:54.163 17:33:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:36:54.163 17:33:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:36:54.163 17:33:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:36:54.163 17:33:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:36:54.163 17:33:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:36:54.163 17:33:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:36:54.163 17:33:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:54.163 17:33:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:36:54.163 17:33:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
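Annotation: the two traced enumeration steps above reduce to a single pipeline; an equivalent standalone sketch using the same repo scripts:

    # gen_nvme.sh emits a bdev JSON config; jq pulls each controller's PCI address (bdf)
    mapfile -t bdfs < <(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr')
    printf '%s\n' "${bdfs[@]}"   # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0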
00:36:54.422 17:33:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:36:54.422 17:33:54 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:36:54.422 17:33:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:36:54.422 17:33:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:36:54.681 [2024-11-26 17:33:55.221421] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64551) is not found. Dropping the request. 00:37:04.695 Executing: test_write_invalid_db 00:37:04.695 Waiting for AER completion... 00:37:04.695 Failure: test_write_invalid_db 00:37:04.695 00:37:04.695 Executing: test_invalid_db_write_overflow_sq 00:37:04.695 Waiting for AER completion... 00:37:04.695 Failure: test_invalid_db_write_overflow_sq 00:37:04.695 00:37:04.696 Executing: test_invalid_db_write_overflow_cq 00:37:04.696 Waiting for AER completion... 00:37:04.696 Failure: test_invalid_db_write_overflow_cq 00:37:04.696 00:37:04.696 17:34:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:37:04.696 17:34:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:37:04.696 [2024-11-26 17:34:05.237978] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64551) is not found. Dropping the request. 00:37:14.674 Executing: test_write_invalid_db 00:37:14.674 Waiting for AER completion... 00:37:14.674 Failure: test_write_invalid_db 00:37:14.674 00:37:14.674 Executing: test_invalid_db_write_overflow_sq 00:37:14.674 Waiting for AER completion... 00:37:14.674 Failure: test_invalid_db_write_overflow_sq 00:37:14.674 00:37:14.674 Executing: test_invalid_db_write_overflow_cq 00:37:14.674 Waiting for AER completion... 00:37:14.674 Failure: test_invalid_db_write_overflow_cq 00:37:14.674 00:37:14.674 17:34:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:37:14.674 17:34:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:37:14.674 [2024-11-26 17:34:15.333543] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64551) is not found. Dropping the request. 00:37:24.660 Executing: test_write_invalid_db 00:37:24.660 Waiting for AER completion... 00:37:24.660 Failure: test_write_invalid_db 00:37:24.660 00:37:24.660 Executing: test_invalid_db_write_overflow_sq 00:37:24.660 Waiting for AER completion... 00:37:24.660 Failure: test_invalid_db_write_overflow_sq 00:37:24.660 00:37:24.660 Executing: test_invalid_db_write_overflow_cq 00:37:24.660 Waiting for AER completion... 
00:37:24.660 Failure: test_invalid_db_write_overflow_cq 00:37:24.660 00:37:24.660 17:34:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:37:24.661 17:34:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:37:24.920 [2024-11-26 17:34:25.386802] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64551) is not found. Dropping the request. 00:37:34.903 Executing: test_write_invalid_db 00:37:34.903 Waiting for AER completion... 00:37:34.903 Failure: test_write_invalid_db 00:37:34.903 00:37:34.903 Executing: test_invalid_db_write_overflow_sq 00:37:34.903 Waiting for AER completion... 00:37:34.903 Failure: test_invalid_db_write_overflow_sq 00:37:34.903 00:37:34.903 Executing: test_invalid_db_write_overflow_cq 00:37:34.903 Waiting for AER completion... 00:37:34.903 Failure: test_invalid_db_write_overflow_cq 00:37:34.904 00:37:34.904 ************************************ 00:37:34.904 END TEST nvme_doorbell_aers 00:37:34.904 ************************************ 00:37:34.904 00:37:34.904 real 0m40.335s 00:37:34.904 user 0m28.208s 00:37:34.904 sys 0m11.731s 00:37:34.904 17:34:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:34.904 17:34:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:37:34.904 17:34:35 nvme -- nvme/nvme.sh@97 -- # uname 00:37:34.904 17:34:35 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:37:34.904 17:34:35 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:37:34.904 17:34:35 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:37:34.904 17:34:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:34.904 17:34:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:37:34.904 ************************************ 00:37:34.904 START TEST nvme_multi_aen 00:37:34.904 ************************************ 00:37:34.904 17:34:35 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:37:34.904 [2024-11-26 17:34:35.493331] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64551) is not found. Dropping the request. 00:37:34.904 [2024-11-26 17:34:35.493439] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64551) is not found. Dropping the request. 00:37:34.904 [2024-11-26 17:34:35.493457] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64551) is not found. Dropping the request. 00:37:34.904 [2024-11-26 17:34:35.495401] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64551) is not found. Dropping the request. 00:37:34.904 [2024-11-26 17:34:35.495457] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64551) is not found. Dropping the request. 00:37:34.904 [2024-11-26 17:34:35.495472] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64551) is not found. Dropping the request. 00:37:34.904 [2024-11-26 17:34:35.496998] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64551) is not found. 
Dropping the request. 00:37:34.904 [2024-11-26 17:34:35.497037] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64551) is not found. Dropping the request. 00:37:34.904 [2024-11-26 17:34:35.497052] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64551) is not found. Dropping the request. 00:37:34.904 [2024-11-26 17:34:35.498556] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64551) is not found. Dropping the request. 00:37:34.904 [2024-11-26 17:34:35.498596] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64551) is not found. Dropping the request. 00:37:34.904 [2024-11-26 17:34:35.498611] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64551) is not found. Dropping the request. 00:37:34.904 Child process pid: 65068 00:37:35.163 [Child] Asynchronous Event Request test 00:37:35.163 [Child] Attached to 0000:00:10.0 00:37:35.163 [Child] Attached to 0000:00:11.0 00:37:35.163 [Child] Attached to 0000:00:13.0 00:37:35.163 [Child] Attached to 0000:00:12.0 00:37:35.163 [Child] Registering asynchronous event callbacks... 00:37:35.163 [Child] Getting orig temperature thresholds of all controllers 00:37:35.163 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:37:35.163 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:37:35.163 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:37:35.163 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:37:35.163 [Child] Waiting for all controllers to trigger AER and reset threshold 00:37:35.163 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:37:35.163 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:37:35.163 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:37:35.163 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:37:35.163 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:37:35.163 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:37:35.163 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:37:35.163 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:37:35.163 [Child] Cleaning up... 00:37:35.423 Asynchronous Event Request test 00:37:35.423 Attached to 0000:00:10.0 00:37:35.423 Attached to 0000:00:11.0 00:37:35.423 Attached to 0000:00:13.0 00:37:35.423 Attached to 0000:00:12.0 00:37:35.423 Reset controller to setup AER completions for this process 00:37:35.423 Registering asynchronous event callbacks... 
00:37:35.423 Getting orig temperature thresholds of all controllers 00:37:35.423 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:37:35.423 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:37:35.423 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:37:35.423 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:37:35.423 Setting all controllers temperature threshold low to trigger AER 00:37:35.423 Waiting for all controllers temperature threshold to be set lower 00:37:35.423 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:37:35.423 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:37:35.423 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:37:35.423 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:37:35.423 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:37:35.423 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:37:35.423 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:37:35.423 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:37:35.423 Waiting for all controllers to trigger AER and reset threshold 00:37:35.423 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:37:35.423 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:37:35.423 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:37:35.423 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:37:35.423 Cleaning up... 00:37:35.423 00:37:35.423 real 0m0.674s 00:37:35.423 user 0m0.218s 00:37:35.423 sys 0m0.342s 00:37:35.423 17:34:35 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:35.423 17:34:35 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:37:35.423 ************************************ 00:37:35.423 END TEST nvme_multi_aen 00:37:35.423 ************************************ 00:37:35.423 17:34:35 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:37:35.423 17:34:35 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:35.423 17:34:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:35.423 17:34:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:37:35.423 ************************************ 00:37:35.423 START TEST nvme_startup 00:37:35.423 ************************************ 00:37:35.423 17:34:35 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:37:35.682 Initializing NVMe Controllers 00:37:35.682 Attached to 0000:00:10.0 00:37:35.683 Attached to 0000:00:11.0 00:37:35.683 Attached to 0000:00:13.0 00:37:35.683 Attached to 0000:00:12.0 00:37:35.683 Initialization complete. 00:37:35.683 Time used:201940.734 (us). 
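All the temperature lines in the AER tests above use the same two units, and the trigger is pure arithmetic: the controllers report Kelvin, and an event fires as soon as the threshold is pushed below the current reading. Checking the numbers from this log:

k_to_c() { echo $(( $1 - 273 )); }
k_to_c 343   # -> 70, the original threshold (343 Kelvin / 70 Celsius above)
k_to_c 323   # -> 50, the current temperature (323 Kelvin / 50 Celsius above)
# Dropping the threshold below 323 K makes "current >= threshold" true
# immediately, so each controller posts the SMART/health AEN logged above
# as "aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01";
# the callback then restores the original 343 K threshold.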
00:37:35.683 ************************************ 00:37:35.683 END TEST nvme_startup 00:37:35.683 ************************************ 00:37:35.683 00:37:35.683 real 0m0.308s 00:37:35.683 user 0m0.111s 00:37:35.683 sys 0m0.150s 00:37:35.683 17:34:36 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:35.683 17:34:36 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:37:35.683 17:34:36 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:37:35.683 17:34:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:35.683 17:34:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:35.683 17:34:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:37:35.683 ************************************ 00:37:35.683 START TEST nvme_multi_secondary 00:37:35.683 ************************************ 00:37:35.683 17:34:36 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:37:35.683 17:34:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65124 00:37:35.683 17:34:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:37:35.683 17:34:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65125 00:37:35.683 17:34:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:37:35.683 17:34:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:37:39.874 Initializing NVMe Controllers 00:37:39.874 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:37:39.874 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:37:39.874 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:37:39.874 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:37:39.874 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:37:39.874 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:37:39.874 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:37:39.874 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:37:39.874 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:37:39.874 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:37:39.874 Initialization complete. Launching workers. 
00:37:39.874 ======================================================== 00:37:39.874 Latency(us) 00:37:39.874 Device Information : IOPS MiB/s Average min max 00:37:39.874 PCIE (0000:00:10.0) NSID 1 from core 2: 3157.97 12.34 5064.49 1052.09 11851.48 00:37:39.874 PCIE (0000:00:11.0) NSID 1 from core 2: 3157.97 12.34 5065.73 1100.15 11228.16 00:37:39.874 PCIE (0000:00:13.0) NSID 1 from core 2: 3157.97 12.34 5066.26 1142.28 11344.04 00:37:39.874 PCIE (0000:00:12.0) NSID 1 from core 2: 3157.97 12.34 5066.32 1093.05 12436.70 00:37:39.874 PCIE (0000:00:12.0) NSID 2 from core 2: 3157.97 12.34 5066.38 1089.32 12434.79 00:37:39.874 PCIE (0000:00:12.0) NSID 3 from core 2: 3157.97 12.34 5066.60 1077.18 11517.40 00:37:39.874 ======================================================== 00:37:39.874 Total : 18947.85 74.02 5065.96 1052.09 12436.70 00:37:39.874 00:37:39.874 17:34:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65124 00:37:39.874 Initializing NVMe Controllers 00:37:39.874 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:37:39.874 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:37:39.874 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:37:39.874 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:37:39.874 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:37:39.874 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:37:39.874 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:37:39.874 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:37:39.874 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:37:39.874 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:37:39.874 Initialization complete. Launching workers. 00:37:39.874 ======================================================== 00:37:39.874 Latency(us) 00:37:39.874 Device Information : IOPS MiB/s Average min max 00:37:39.874 PCIE (0000:00:10.0) NSID 1 from core 1: 5016.93 19.60 3186.84 1624.31 6194.00 00:37:39.874 PCIE (0000:00:11.0) NSID 1 from core 1: 5016.93 19.60 3188.56 1539.13 6208.53 00:37:39.874 PCIE (0000:00:13.0) NSID 1 from core 1: 5016.93 19.60 3188.65 1656.00 5586.90 00:37:39.874 PCIE (0000:00:12.0) NSID 1 from core 1: 5016.93 19.60 3188.61 1661.30 6234.96 00:37:39.874 PCIE (0000:00:12.0) NSID 2 from core 1: 5016.93 19.60 3188.58 1500.47 6361.65 00:37:39.874 PCIE (0000:00:12.0) NSID 3 from core 1: 5016.93 19.60 3188.71 1643.44 6154.88 00:37:39.874 ======================================================== 00:37:39.874 Total : 30101.56 117.58 3188.32 1500.47 6361.65 00:37:39.874 00:37:41.253 Initializing NVMe Controllers 00:37:41.253 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:37:41.253 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:37:41.253 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:37:41.253 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:37:41.253 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:37:41.253 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:37:41.253 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:37:41.253 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:37:41.253 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:37:41.253 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:37:41.253 Initialization complete. Launching workers. 
00:37:41.253 ======================================================== 00:37:41.253 Latency(us) 00:37:41.253 Device Information : IOPS MiB/s Average min max 00:37:41.253 PCIE (0000:00:10.0) NSID 1 from core 0: 7894.14 30.84 2025.29 918.38 9602.61 00:37:41.253 PCIE (0000:00:11.0) NSID 1 from core 0: 7894.14 30.84 2026.34 950.36 9811.80 00:37:41.253 PCIE (0000:00:13.0) NSID 1 from core 0: 7894.14 30.84 2026.28 874.46 9463.74 00:37:41.253 PCIE (0000:00:12.0) NSID 1 from core 0: 7894.14 30.84 2026.24 839.73 9165.73 00:37:41.253 PCIE (0000:00:12.0) NSID 2 from core 0: 7894.14 30.84 2026.20 785.53 9486.81 00:37:41.253 PCIE (0000:00:12.0) NSID 3 from core 0: 7897.34 30.85 2025.35 742.08 9482.10 00:37:41.253 ======================================================== 00:37:41.253 Total : 47368.03 185.03 2025.95 742.08 9811.80 00:37:41.253 00:37:41.253 17:34:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65125 00:37:41.253 17:34:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65194 00:37:41.253 17:34:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:37:41.253 17:34:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65195 00:37:41.253 17:34:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:37:41.253 17:34:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:37:44.542 Initializing NVMe Controllers 00:37:44.542 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:37:44.542 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:37:44.542 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:37:44.542 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:37:44.542 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:37:44.542 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:37:44.542 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:37:44.542 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:37:44.542 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:37:44.542 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:37:44.542 Initialization complete. Launching workers. 
00:37:44.542 ======================================================== 00:37:44.542 Latency(us) 00:37:44.542 Device Information : IOPS MiB/s Average min max 00:37:44.542 PCIE (0000:00:10.0) NSID 1 from core 0: 5308.70 20.74 3011.74 947.54 10547.29 00:37:44.542 PCIE (0000:00:11.0) NSID 1 from core 0: 5308.70 20.74 3013.43 965.05 10702.42 00:37:44.543 PCIE (0000:00:13.0) NSID 1 from core 0: 5308.70 20.74 3013.48 962.70 12042.14 00:37:44.543 PCIE (0000:00:12.0) NSID 1 from core 0: 5308.70 20.74 3013.54 979.58 10173.69 00:37:44.543 PCIE (0000:00:12.0) NSID 2 from core 0: 5308.70 20.74 3013.77 965.97 10013.39 00:37:44.543 PCIE (0000:00:12.0) NSID 3 from core 0: 5314.03 20.76 3010.79 968.69 10258.72 00:37:44.543 ======================================================== 00:37:44.543 Total : 31857.55 124.44 3012.79 947.54 12042.14 00:37:44.543 00:37:44.543 Initializing NVMe Controllers 00:37:44.543 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:37:44.543 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:37:44.543 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:37:44.543 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:37:44.543 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:37:44.543 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:37:44.543 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:37:44.543 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:37:44.543 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:37:44.543 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:37:44.543 Initialization complete. Launching workers. 00:37:44.543 ======================================================== 00:37:44.543 Latency(us) 00:37:44.543 Device Information : IOPS MiB/s Average min max 00:37:44.543 PCIE (0000:00:10.0) NSID 1 from core 1: 5077.10 19.83 3149.03 1006.94 9804.13 00:37:44.543 PCIE (0000:00:11.0) NSID 1 from core 1: 5077.10 19.83 3150.68 1024.03 10247.16 00:37:44.543 PCIE (0000:00:13.0) NSID 1 from core 1: 5077.10 19.83 3150.65 998.71 8974.99 00:37:44.543 PCIE (0000:00:12.0) NSID 1 from core 1: 5077.10 19.83 3150.61 1009.04 8721.68 00:37:44.543 PCIE (0000:00:12.0) NSID 2 from core 1: 5077.10 19.83 3150.58 1009.05 9006.29 00:37:44.543 PCIE (0000:00:12.0) NSID 3 from core 1: 5077.10 19.83 3150.54 987.98 9743.06 00:37:44.543 ======================================================== 00:37:44.543 Total : 30462.58 118.99 3150.35 987.98 10247.16 00:37:44.543 00:37:47.078 Initializing NVMe Controllers 00:37:47.078 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:37:47.078 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:37:47.078 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:37:47.078 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:37:47.078 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:37:47.078 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:37:47.078 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:37:47.078 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:37:47.078 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:37:47.078 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:37:47.078 Initialization complete. Launching workers. 
00:37:47.078 ======================================================== 00:37:47.078 Latency(us) 00:37:47.078 Device Information : IOPS MiB/s Average min max 00:37:47.078 PCIE (0000:00:10.0) NSID 1 from core 2: 3099.32 12.11 5160.54 1166.95 14869.76 00:37:47.078 PCIE (0000:00:11.0) NSID 1 from core 2: 3099.32 12.11 5162.20 1178.70 15600.05 00:37:47.078 PCIE (0000:00:13.0) NSID 1 from core 2: 3099.32 12.11 5161.85 1147.45 13882.10 00:37:47.078 PCIE (0000:00:12.0) NSID 1 from core 2: 3099.32 12.11 5162.01 1142.25 14986.81 00:37:47.078 PCIE (0000:00:12.0) NSID 2 from core 2: 3099.32 12.11 5161.91 1224.16 14041.53 00:37:47.078 PCIE (0000:00:12.0) NSID 3 from core 2: 3099.32 12.11 5161.81 1136.48 14743.73 00:37:47.078 ======================================================== 00:37:47.078 Total : 18595.94 72.64 5161.72 1136.48 15600.05 00:37:47.078 00:37:47.078 ************************************ 00:37:47.078 END TEST nvme_multi_secondary 00:37:47.078 ************************************ 00:37:47.078 17:34:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65194 00:37:47.078 17:34:47 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65195 00:37:47.078 00:37:47.078 real 0m11.018s 00:37:47.078 user 0m18.604s 00:37:47.078 sys 0m1.091s 00:37:47.078 17:34:47 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:47.078 17:34:47 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:37:47.078 17:34:47 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:37:47.078 17:34:47 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:37:47.078 17:34:47 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64131 ]] 00:37:47.078 17:34:47 nvme -- common/autotest_common.sh@1094 -- # kill 64131 00:37:47.078 17:34:47 nvme -- common/autotest_common.sh@1095 -- # wait 64131 00:37:47.078 [2024-11-26 17:34:47.427365] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65067) is not found. Dropping the request. 00:37:47.078 [2024-11-26 17:34:47.429126] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65067) is not found. Dropping the request. 00:37:47.078 [2024-11-26 17:34:47.429233] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65067) is not found. Dropping the request. 00:37:47.078 [2024-11-26 17:34:47.429287] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65067) is not found. Dropping the request. 00:37:47.078 [2024-11-26 17:34:47.435167] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65067) is not found. Dropping the request. 00:37:47.079 [2024-11-26 17:34:47.435238] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65067) is not found. Dropping the request. 00:37:47.079 [2024-11-26 17:34:47.435268] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65067) is not found. Dropping the request. 00:37:47.079 [2024-11-26 17:34:47.435300] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65067) is not found. Dropping the request. 00:37:47.079 [2024-11-26 17:34:47.439693] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65067) is not found. Dropping the request. 
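For reference while reading the three result tables above: nvme_multi_secondary launches three spdk_nvme_perf instances against the same controllers at once, sharing DPDK state through the shared-memory group id -i 0 (the first process up acts as primary, the others attach as secondaries), each pinned to its own core mask. A sketch of round one, with arguments taken from the nvme.sh@51-@55 traces:

PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

"$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # core 0
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # core 1
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4             # core 2, foreground
wait "$pid0"
wait "$pid1"

The second round (pids 65194/65195) moves the 5-second run onto core mask 0x4, which is why the core 2 table finishes last there.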
00:37:47.079 [2024-11-26 17:34:47.439971] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65067) is not found. Dropping the request. 00:37:47.079 [2024-11-26 17:34:47.440009] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65067) is not found. Dropping the request. 00:37:47.079 [2024-11-26 17:34:47.440039] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65067) is not found. Dropping the request. 00:37:47.079 [2024-11-26 17:34:47.444470] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65067) is not found. Dropping the request. 00:37:47.079 [2024-11-26 17:34:47.444553] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65067) is not found. Dropping the request. 00:37:47.079 [2024-11-26 17:34:47.444582] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65067) is not found. Dropping the request. 00:37:47.079 [2024-11-26 17:34:47.444612] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65067) is not found. Dropping the request. 00:37:47.079 17:34:47 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:37:47.079 17:34:47 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:37:47.079 17:34:47 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:37:47.079 17:34:47 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:47.079 17:34:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:47.079 17:34:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:37:47.079 ************************************ 00:37:47.079 START TEST bdev_nvme_reset_stuck_adm_cmd 00:37:47.079 ************************************ 00:37:47.079 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:37:47.338 * Looking for test storage... 
00:37:47.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:37:47.338 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:47.338 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:37:47.338 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:47.338 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:47.338 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:47.338 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:47.338 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:47.338 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:37:47.338 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:37:47.338 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:37:47.338 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:37:47.338 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:47.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.339 --rc genhtml_branch_coverage=1 00:37:47.339 --rc genhtml_function_coverage=1 00:37:47.339 --rc genhtml_legend=1 00:37:47.339 --rc geninfo_all_blocks=1 00:37:47.339 --rc geninfo_unexecuted_blocks=1 00:37:47.339 00:37:47.339 ' 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:47.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.339 --rc genhtml_branch_coverage=1 00:37:47.339 --rc genhtml_function_coverage=1 00:37:47.339 --rc genhtml_legend=1 00:37:47.339 --rc geninfo_all_blocks=1 00:37:47.339 --rc geninfo_unexecuted_blocks=1 00:37:47.339 00:37:47.339 ' 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:47.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.339 --rc genhtml_branch_coverage=1 00:37:47.339 --rc genhtml_function_coverage=1 00:37:47.339 --rc genhtml_legend=1 00:37:47.339 --rc geninfo_all_blocks=1 00:37:47.339 --rc geninfo_unexecuted_blocks=1 00:37:47.339 00:37:47.339 ' 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:47.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.339 --rc genhtml_branch_coverage=1 00:37:47.339 --rc genhtml_function_coverage=1 00:37:47.339 --rc genhtml_legend=1 00:37:47.339 --rc geninfo_all_blocks=1 00:37:47.339 --rc geninfo_unexecuted_blocks=1 00:37:47.339 00:37:47.339 ' 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:37:47.339 
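Two knobs set in the trace just above drive the whole stuck-admin-command scenario, and their units differ: the injected delay is expressed in microseconds, the pass/fail budget in seconds. Restated with the conversion spelled out:

ctrlr_name=nvme0
err_injection_timeout=15000000                  # microseconds
echo "$(( err_injection_timeout / 1000000 ))s"  # -> 15s: how long the injected
                                                #    command may stay stuck
test_timeout=5                                  # seconds: the reset path must
                                                # recover within this budget

The (( diff_time > test_timeout )) check near the end of the test passes in this run with diff_time=2.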
17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:37:47.339 17:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:37:47.339 17:34:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:37:47.339 17:34:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:37:47.339 17:34:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:37:47.339 17:34:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:37:47.339 17:34:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:37:47.339 17:34:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65361 00:37:47.339 17:34:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:37:47.339 17:34:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:37:47.339 17:34:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65361 00:37:47.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:47.339 17:34:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65361 ']' 00:37:47.339 17:34:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:47.339 17:34:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:47.339 17:34:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
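With spdk_tgt up and listening, the controller-side steps of this test are driven over JSON-RPC. The traces that follow (attach, error injection, a deliberately stuck Get Features, reset, detach) reduce to the sequence below; this is a sketch assembled from the rpc.py calls visible in this log, with the base64 admin-command payload left as a placeholder (the exact string appears in the bdev_nvme_send_cmd trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0

# Arm a one-shot failure: admin opcode 10 (0x0a, Get Features) is held for
# up to 15 s and then completed with sct 0 / sc 1 -- the generic
# "Invalid Command Opcode" status the log later prints as (00/01).
$rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

cmd_b64=...   # placeholder: base64 of the 64-byte admin command
              # (opcode 0x0a, cdw10=7); full value is in the trace below

# Fire the Get Features (Number of Queues) that will get stuck...
$rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd_b64" &
get_feat_pid=$!
sleep 2

# ...then reset the controller; the reset must manually complete the
# pending request, which is the INVALID OPCODE (00/01) completion below.
$rpc bdev_nvme_reset_controller nvme0
wait "$get_feat_pid"
$rpc bdev_nvme_detach_controller nvme0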
00:37:47.339 17:34:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:47.339 17:34:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:37:47.599 [2024-11-26 17:34:48.139083] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:37:47.599 [2024-11-26 17:34:48.139225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65361 ] 00:37:47.857 [2024-11-26 17:34:48.347904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:47.857 [2024-11-26 17:34:48.493382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:47.857 [2024-11-26 17:34:48.493540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:47.857 [2024-11-26 17:34:48.493677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:47.858 [2024-11-26 17:34:48.493682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:49.301 17:34:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:49.301 17:34:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:37:49.301 17:34:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:37:49.301 17:34:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.301 17:34:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:37:49.301 nvme0n1 00:37:49.301 17:34:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.301 17:34:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:37:49.301 17:34:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_125SK.txt 00:37:49.301 17:34:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:37:49.301 17:34:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.302 17:34:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:37:49.302 true 00:37:49.302 17:34:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.302 17:34:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:37:49.302 17:34:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732642489 00:37:49.302 17:34:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65395 00:37:49.302 17:34:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:37:49.302 17:34:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:37:49.302 
17:34:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:37:51.209 [2024-11-26 17:34:51.703484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:37:51.209 [2024-11-26 17:34:51.704028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:51.209 [2024-11-26 17:34:51.704161] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:37:51.209 [2024-11-26 17:34:51.704301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.209 [2024-11-26 17:34:51.706544] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65395 00:37:51.209 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65395 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65395 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_125SK.txt 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_125SK.txt 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65361 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65361 ']' 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65361 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:51.209 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65361 00:37:51.209 killing process with pid 65361 00:37:51.210 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:51.210 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:51.210 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65361' 00:37:51.210 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65361 00:37:51.210 17:34:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65361 00:37:54.518 17:34:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:37:54.518 17:34:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:37:54.518 ************************************ 00:37:54.518 END TEST bdev_nvme_reset_stuck_adm_cmd 00:37:54.518 ************************************ 00:37:54.518 00:37:54.518 real 0m6.981s 
00:37:54.518 user 0m24.057s 00:37:54.518 sys 0m1.027s 00:37:54.518 17:34:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:54.518 17:34:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:37:54.518 17:34:54 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:37:54.518 17:34:54 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:37:54.518 17:34:54 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:54.518 17:34:54 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:54.518 17:34:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:37:54.518 ************************************ 00:37:54.518 START TEST nvme_fio 00:37:54.518 ************************************ 00:37:54.518 17:34:54 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:37:54.518 17:34:54 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:37:54.518 17:34:54 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:37:54.518 17:34:54 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:37:54.518 17:34:54 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:37:54.518 17:34:54 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:37:54.518 17:34:54 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:54.518 17:34:54 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:37:54.518 17:34:54 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:37:54.518 17:34:54 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:37:54.518 17:34:54 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:37:54.518 17:34:54 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:37:54.518 17:34:54 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:37:54.518 17:34:54 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:37:54.518 17:34:54 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:37:54.518 17:34:54 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:37:54.518 17:34:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:37:54.518 17:34:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:37:54.777 17:34:55 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:37:54.777 17:34:55 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:37:54.777 17:34:55 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:37:54.777 17:34:55 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:54.777 17:34:55 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:54.777 17:34:55 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:54.777 17:34:55 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:37:54.777 17:34:55 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:37:54.777 17:34:55 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:54.777 17:34:55 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:54.777 17:34:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:37:54.777 17:34:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:37:54.777 17:34:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:54.777 17:34:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:54.777 17:34:55 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:54.777 17:34:55 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:37:54.777 17:34:55 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:37:54.777 17:34:55 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:37:55.037 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:37:55.037 fio-3.35 00:37:55.037 Starting 1 thread 00:37:59.231 00:37:59.231 test: (groupid=0, jobs=1): err= 0: pid=65546: Tue Nov 26 17:34:59 2024 00:37:59.231 read: IOPS=22.0k, BW=85.9MiB/s (90.0MB/s)(172MiB/2001msec) 00:37:59.231 slat (usec): min=4, max=100, avg= 4.88, stdev= 1.12 00:37:59.231 clat (usec): min=208, max=11241, avg=2902.89, stdev=308.70 00:37:59.231 lat (usec): min=213, max=11342, avg=2907.77, stdev=309.13 00:37:59.231 clat percentiles (usec): 00:37:59.231 | 1.00th=[ 2442], 5.00th=[ 2704], 10.00th=[ 2769], 20.00th=[ 2802], 00:37:59.231 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2868], 60.00th=[ 2900], 00:37:59.231 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3130], 00:37:59.231 | 99.00th=[ 3916], 99.50th=[ 4686], 99.90th=[ 6128], 99.95th=[ 8586], 00:37:59.231 | 99.99th=[10945] 00:37:59.231 bw ( KiB/s): min=83952, max=89104, per=99.23%, avg=87253.33, stdev=2866.03, samples=3 00:37:59.231 iops : min=20988, max=22276, avg=21813.33, stdev=716.51, samples=3 00:37:59.231 write: IOPS=21.8k, BW=85.3MiB/s (89.5MB/s)(171MiB/2001msec); 0 zone resets 00:37:59.231 slat (nsec): min=4379, max=35151, avg=5260.71, stdev=972.97 00:37:59.231 clat (usec): min=171, max=10952, avg=2910.20, stdev=313.97 00:37:59.231 lat (usec): min=176, max=10966, avg=2915.46, stdev=314.35 00:37:59.231 clat percentiles (usec): 00:37:59.231 | 1.00th=[ 2474], 5.00th=[ 2704], 10.00th=[ 2769], 20.00th=[ 2802], 00:37:59.231 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2868], 60.00th=[ 2900], 00:37:59.231 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3163], 00:37:59.231 | 99.00th=[ 3916], 99.50th=[ 4686], 99.90th=[ 6783], 99.95th=[ 8717], 00:37:59.231 | 99.99th=[10683] 00:37:59.231 bw ( KiB/s): min=83896, max=89904, per=100.00%, avg=87418.67, stdev=3135.45, samples=3 00:37:59.231 iops : min=20974, max=22476, avg=21854.67, stdev=783.86, samples=3 00:37:59.231 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:37:59.231 lat (msec) : 2=0.38%, 4=98.62%, 10=0.93%, 20=0.03% 00:37:59.231 cpu : usr=99.35%, sys=0.15%, ctx=5, majf=0, 
minf=606 00:37:59.231 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:37:59.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:59.231 issued rwts: total=43989,43711,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.231 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:59.231 00:37:59.231 Run status group 0 (all jobs): 00:37:59.231 READ: bw=85.9MiB/s (90.0MB/s), 85.9MiB/s-85.9MiB/s (90.0MB/s-90.0MB/s), io=172MiB (180MB), run=2001-2001msec 00:37:59.231 WRITE: bw=85.3MiB/s (89.5MB/s), 85.3MiB/s-85.3MiB/s (89.5MB/s-89.5MB/s), io=171MiB (179MB), run=2001-2001msec 00:37:59.231 ----------------------------------------------------- 00:37:59.231 Suppressions used: 00:37:59.231 count bytes template 00:37:59.231 1 32 /usr/src/fio/parse.c 00:37:59.231 1 8 libtcmalloc_minimal.so 00:37:59.231 ----------------------------------------------------- 00:37:59.231 00:37:59.231 17:34:59 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:37:59.231 17:34:59 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:37:59.231 17:34:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:37:59.231 17:34:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:37:59.231 17:34:59 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:37:59.231 17:34:59 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:37:59.491 17:34:59 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:37:59.491 17:34:59 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:37:59.491 17:34:59 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:37:59.491 17:34:59 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:59.491 17:34:59 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:59.491 17:34:59 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:59.491 17:34:59 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:37:59.491 17:34:59 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:37:59.491 17:34:59 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:59.491 17:34:59 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:59.491 17:34:59 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:37:59.491 17:34:59 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:59.491 17:34:59 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:37:59.491 17:34:59 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:59.491 17:34:59 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:59.491 17:34:59 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:37:59.491 17:34:59 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:37:59.491 17:34:59 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:37:59.491 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:37:59.491 fio-3.35 00:37:59.491 Starting 1 thread 00:38:03.684 00:38:03.684 test: (groupid=0, jobs=1): err= 0: pid=65612: Tue Nov 26 17:35:03 2024 00:38:03.685 read: IOPS=22.0k, BW=86.1MiB/s (90.3MB/s)(172MiB/2001msec) 00:38:03.685 slat (nsec): min=4242, max=51953, avg=4905.78, stdev=987.36 00:38:03.685 clat (usec): min=987, max=10835, avg=2899.37, stdev=312.62 00:38:03.685 lat (usec): min=992, max=10887, avg=2904.28, stdev=312.91 00:38:03.685 clat percentiles (usec): 00:38:03.685 | 1.00th=[ 2212], 5.00th=[ 2737], 10.00th=[ 2769], 20.00th=[ 2802], 00:38:03.685 | 30.00th=[ 2835], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2900], 00:38:03.685 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3130], 00:38:03.685 | 99.00th=[ 3949], 99.50th=[ 4752], 99.90th=[ 6194], 99.95th=[ 8356], 00:38:03.685 | 99.99th=[10683] 00:38:03.685 bw ( KiB/s): min=86080, max=89160, per=99.63%, avg=87818.67, stdev=1577.98, samples=3 00:38:03.685 iops : min=21520, max=22290, avg=21954.67, stdev=394.49, samples=3 00:38:03.685 write: IOPS=21.9k, BW=85.5MiB/s (89.7MB/s)(171MiB/2001msec); 0 zone resets 00:38:03.685 slat (nsec): min=4358, max=40059, avg=5284.92, stdev=1015.51 00:38:03.685 clat (usec): min=944, max=10758, avg=2902.78, stdev=322.88 00:38:03.685 lat (usec): min=949, max=10770, avg=2908.06, stdev=323.18 00:38:03.685 clat percentiles (usec): 00:38:03.685 | 1.00th=[ 2180], 5.00th=[ 2737], 10.00th=[ 2769], 20.00th=[ 2802], 00:38:03.685 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2868], 60.00th=[ 2900], 00:38:03.685 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3130], 00:38:03.685 | 99.00th=[ 3982], 99.50th=[ 4752], 99.90th=[ 6587], 99.95th=[ 8717], 00:38:03.685 | 99.99th=[10552] 00:38:03.685 bw ( KiB/s): min=85728, max=90168, per=100.00%, avg=88037.33, stdev=2225.39, samples=3 00:38:03.685 iops : min=21432, max=22542, avg=22009.33, stdev=556.35, samples=3 00:38:03.685 lat (usec) : 1000=0.01% 00:38:03.685 lat (msec) : 2=0.70%, 4=98.33%, 10=0.94%, 20=0.02% 00:38:03.685 cpu : usr=99.35%, sys=0.15%, ctx=2, majf=0, minf=606 00:38:03.685 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:38:03.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:03.685 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:03.685 issued rwts: total=44092,43813,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:03.685 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:03.685 00:38:03.685 Run status group 0 (all jobs): 00:38:03.685 READ: bw=86.1MiB/s (90.3MB/s), 86.1MiB/s-86.1MiB/s (90.3MB/s-90.3MB/s), io=172MiB (181MB), run=2001-2001msec 00:38:03.685 WRITE: bw=85.5MiB/s (89.7MB/s), 85.5MiB/s-85.5MiB/s (89.7MB/s-89.7MB/s), io=171MiB (179MB), run=2001-2001msec 00:38:03.685 ----------------------------------------------------- 00:38:03.685 Suppressions used: 00:38:03.685 count bytes template 00:38:03.685 1 32 /usr/src/fio/parse.c 00:38:03.685 1 8 libtcmalloc_minimal.so 00:38:03.685 ----------------------------------------------------- 00:38:03.685 00:38:03.685 17:35:04 nvme.nvme_fio 
-- nvme/nvme.sh@44 -- # ran_fio=true 00:38:03.685 17:35:04 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:38:03.685 17:35:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:38:03.685 17:35:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:38:03.944 17:35:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:38:03.944 17:35:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:38:04.204 17:35:04 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:38:04.204 17:35:04 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:38:04.204 17:35:04 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:38:04.204 17:35:04 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:04.204 17:35:04 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:04.204 17:35:04 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:04.204 17:35:04 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:38:04.204 17:35:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:38:04.204 17:35:04 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:04.204 17:35:04 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:04.204 17:35:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:38:04.204 17:35:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:38:04.204 17:35:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:04.204 17:35:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:04.204 17:35:04 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:04.204 17:35:04 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:38:04.204 17:35:04 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:38:04.204 17:35:04 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:38:04.204 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:38:04.204 fio-3.35 00:38:04.204 Starting 1 thread 00:38:08.395 00:38:08.395 test: (groupid=0, jobs=1): err= 0: pid=65679: Tue Nov 26 17:35:08 2024 00:38:08.395 read: IOPS=21.8k, BW=85.3MiB/s (89.5MB/s)(171MiB/2001msec) 00:38:08.395 slat (nsec): min=4247, max=58425, avg=4921.40, stdev=1138.39 00:38:08.395 clat (usec): min=221, max=12558, avg=2925.48, stdev=496.56 00:38:08.395 lat (usec): min=226, max=12616, avg=2930.40, stdev=497.17 00:38:08.395 clat percentiles (usec): 00:38:08.395 | 1.00th=[ 2671], 5.00th=[ 2737], 10.00th=[ 2769], 20.00th=[ 2802], 00:38:08.395 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 
2868], 60.00th=[ 2868], 00:38:08.395 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3130], 00:38:08.395 | 99.00th=[ 4883], 99.50th=[ 6259], 99.90th=[ 9503], 99.95th=[10290], 00:38:08.395 | 99.99th=[12387] 00:38:08.395 bw ( KiB/s): min=83208, max=90416, per=98.75%, avg=86280.00, stdev=3719.93, samples=3 00:38:08.395 iops : min=20802, max=22604, avg=21570.00, stdev=929.98, samples=3 00:38:08.395 write: IOPS=21.7k, BW=84.7MiB/s (88.8MB/s)(170MiB/2001msec); 0 zone resets 00:38:08.395 slat (nsec): min=4358, max=47174, avg=5322.32, stdev=1206.17 00:38:08.395 clat (usec): min=196, max=12463, avg=2928.25, stdev=493.06 00:38:08.395 lat (usec): min=201, max=12476, avg=2933.57, stdev=493.65 00:38:08.395 clat percentiles (usec): 00:38:08.395 | 1.00th=[ 2671], 5.00th=[ 2737], 10.00th=[ 2769], 20.00th=[ 2802], 00:38:08.395 | 30.00th=[ 2835], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2868], 00:38:08.395 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3130], 00:38:08.395 | 99.00th=[ 4817], 99.50th=[ 6194], 99.90th=[ 9634], 99.95th=[10552], 00:38:08.395 | 99.99th=[12125] 00:38:08.395 bw ( KiB/s): min=83400, max=90760, per=99.59%, avg=86386.67, stdev=3870.99, samples=3 00:38:08.395 iops : min=20850, max=22690, avg=21596.67, stdev=967.75, samples=3 00:38:08.395 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:38:08.395 lat (msec) : 2=0.22%, 4=98.08%, 10=1.59%, 20=0.06% 00:38:08.395 cpu : usr=99.30%, sys=0.10%, ctx=2, majf=0, minf=606 00:38:08.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:38:08.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:08.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:08.395 issued rwts: total=43706,43393,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:08.395 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:08.395 00:38:08.395 Run status group 0 (all jobs): 00:38:08.395 READ: bw=85.3MiB/s (89.5MB/s), 85.3MiB/s-85.3MiB/s (89.5MB/s-89.5MB/s), io=171MiB (179MB), run=2001-2001msec 00:38:08.395 WRITE: bw=84.7MiB/s (88.8MB/s), 84.7MiB/s-84.7MiB/s (88.8MB/s-88.8MB/s), io=170MiB (178MB), run=2001-2001msec 00:38:08.395 ----------------------------------------------------- 00:38:08.395 Suppressions used: 00:38:08.395 count bytes template 00:38:08.395 1 32 /usr/src/fio/parse.c 00:38:08.395 1 8 libtcmalloc_minimal.so 00:38:08.395 ----------------------------------------------------- 00:38:08.395 00:38:08.395 17:35:08 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:38:08.395 17:35:08 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:38:08.395 17:35:08 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:38:08.395 17:35:08 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:38:08.655 17:35:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:38:08.655 17:35:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:38:08.915 17:35:09 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:38:08.915 17:35:09 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:38:08.915 17:35:09 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:38:08.915 17:35:09 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:08.915 17:35:09 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:08.915 17:35:09 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:08.915 17:35:09 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:38:08.915 17:35:09 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:38:08.915 17:35:09 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:08.915 17:35:09 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:08.915 17:35:09 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:38:08.915 17:35:09 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:38:08.915 17:35:09 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:08.915 17:35:09 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:08.915 17:35:09 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:08.915 17:35:09 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:38:08.915 17:35:09 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:38:08.915 17:35:09 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:38:08.915 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:38:08.915 fio-3.35 00:38:08.915 Starting 1 thread 00:38:14.187 00:38:14.187 test: (groupid=0, jobs=1): err= 0: pid=65740: Tue Nov 26 17:35:14 2024 00:38:14.187 read: IOPS=22.3k, BW=87.0MiB/s (91.2MB/s)(174MiB/2001msec) 00:38:14.187 slat (nsec): min=4233, max=55093, avg=4848.99, stdev=1069.85 00:38:14.187 clat (usec): min=222, max=10934, avg=2866.73, stdev=332.16 00:38:14.187 lat (usec): min=233, max=10984, avg=2871.58, stdev=332.48 00:38:14.187 clat percentiles (usec): 00:38:14.187 | 1.00th=[ 2147], 5.00th=[ 2671], 10.00th=[ 2737], 20.00th=[ 2769], 00:38:14.187 | 30.00th=[ 2802], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:38:14.187 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3130], 00:38:14.187 | 99.00th=[ 4146], 99.50th=[ 4752], 99.90th=[ 5932], 99.95th=[ 8586], 00:38:14.187 | 99.99th=[10683] 00:38:14.187 bw ( KiB/s): min=85648, max=90264, per=99.13%, avg=88301.33, stdev=2384.25, samples=3 00:38:14.187 iops : min=21412, max=22566, avg=22075.33, stdev=596.06, samples=3 00:38:14.187 write: IOPS=22.1k, BW=86.4MiB/s (90.6MB/s)(173MiB/2001msec); 0 zone resets 00:38:14.187 slat (nsec): min=4366, max=93419, avg=5201.48, stdev=1075.18 00:38:14.187 clat (usec): min=204, max=10773, avg=2873.86, stdev=343.51 00:38:14.187 lat (usec): min=209, max=10785, avg=2879.06, stdev=343.80 00:38:14.187 clat percentiles (usec): 00:38:14.187 | 1.00th=[ 2147], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2769], 00:38:14.187 | 30.00th=[ 2802], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:38:14.187 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3130], 
00:38:14.187 | 99.00th=[ 4293], 99.50th=[ 4817], 99.90th=[ 6652], 99.95th=[ 8717], 00:38:14.187 | 99.99th=[10290] 00:38:14.187 bw ( KiB/s): min=85480, max=89952, per=99.98%, avg=88458.67, stdev=2579.60, samples=3 00:38:14.187 iops : min=21370, max=22488, avg=22114.67, stdev=644.90, samples=3 00:38:14.187 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:38:14.187 lat (msec) : 2=0.72%, 4=97.95%, 10=1.27%, 20=0.02% 00:38:14.187 cpu : usr=99.35%, sys=0.20%, ctx=3, majf=0, minf=604 00:38:14.187 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:38:14.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:14.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:14.187 issued rwts: total=44562,44258,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:14.187 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:14.187 00:38:14.187 Run status group 0 (all jobs): 00:38:14.187 READ: bw=87.0MiB/s (91.2MB/s), 87.0MiB/s-87.0MiB/s (91.2MB/s-91.2MB/s), io=174MiB (183MB), run=2001-2001msec 00:38:14.187 WRITE: bw=86.4MiB/s (90.6MB/s), 86.4MiB/s-86.4MiB/s (90.6MB/s-90.6MB/s), io=173MiB (181MB), run=2001-2001msec 00:38:14.187 ----------------------------------------------------- 00:38:14.187 Suppressions used: 00:38:14.187 count bytes template 00:38:14.187 1 32 /usr/src/fio/parse.c 00:38:14.187 1 8 libtcmalloc_minimal.so 00:38:14.187 ----------------------------------------------------- 00:38:14.187 00:38:14.187 17:35:14 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:38:14.187 17:35:14 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:38:14.187 00:38:14.187 real 0m20.061s 00:38:14.187 user 0m15.789s 00:38:14.187 sys 0m3.696s 00:38:14.187 17:35:14 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:14.187 17:35:14 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:38:14.187 ************************************ 00:38:14.187 END TEST nvme_fio 00:38:14.187 ************************************ 00:38:14.187 ************************************ 00:38:14.187 END TEST nvme 00:38:14.187 ************************************ 00:38:14.187 00:38:14.187 real 1m36.221s 00:38:14.187 user 3m46.488s 00:38:14.187 sys 0m23.788s 00:38:14.187 17:35:14 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:14.187 17:35:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:38:14.446 17:35:14 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:38:14.446 17:35:14 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:38:14.446 17:35:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:14.446 17:35:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:14.446 17:35:14 -- common/autotest_common.sh@10 -- # set +x 00:38:14.446 ************************************ 00:38:14.446 START TEST nvme_scc 00:38:14.446 ************************************ 00:38:14.446 17:35:14 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:38:14.446 * Looking for test storage... 
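Annotation: the three fio passes above (traddr 0000.00.11.0, 0000.00.12.0 and 0000.00.13.0) are iterations of one per-controller loop in nvme/nvme.sh. A minimal sketch of that loop, reconstructed purely from the xtrace output (paths are shortened, and the 4160 extended-LBA fallback is an assumption; only the plain-4096 branch appears in this log):

    for bdf in "${bdfs[@]}"; do
        # Skip controllers that expose no active namespace.
        spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" \
            | grep -qE '^Namespace ID:[0-9]+' || continue
        # An "Extended Data LBA" format interleaves metadata with data, so the
        # plugin would need the extended block size; every pass in this log
        # took the plain branch (nvme.sh@41 bs=4096).
        if spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" | grep -q 'Extended Data LBA'; then
            bs=4160    # assumed 4096 + 64 extended size, not seen in this run
        else
            bs=4096
        fi
        # fio parses ':' as its own separator, hence the dotted BDF seen in the
        # --filename arguments above: 0000:00:11.0 becomes 0000.00.11.0.
        fio_nvme example_config.fio "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs=$bs
    done

The LD_PRELOAD dance traced before each run (ldd on the fio plugin, grep libasan, awk '{print $3}') exists because the plugin was built with ASan: the sanitizer runtime must be the first DSO loaded, so the script resolves its path and preloads it ahead of the plugin, as the resulting LD_PRELOAD lines above show.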
00:38:14.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:38:14.446 17:35:15 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:14.446 17:35:15 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:38:14.446 17:35:15 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:14.446 17:35:15 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@345 -- # : 1 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:14.446 17:35:15 nvme_scc -- scripts/common.sh@368 -- # return 0 00:38:14.446 17:35:15 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:14.446 17:35:15 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:14.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.446 --rc genhtml_branch_coverage=1 00:38:14.446 --rc genhtml_function_coverage=1 00:38:14.446 --rc genhtml_legend=1 00:38:14.446 --rc geninfo_all_blocks=1 00:38:14.446 --rc geninfo_unexecuted_blocks=1 00:38:14.446 00:38:14.446 ' 00:38:14.446 17:35:15 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:14.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.446 --rc genhtml_branch_coverage=1 00:38:14.446 --rc genhtml_function_coverage=1 00:38:14.446 --rc genhtml_legend=1 00:38:14.446 --rc geninfo_all_blocks=1 00:38:14.446 --rc geninfo_unexecuted_blocks=1 00:38:14.446 00:38:14.446 ' 00:38:14.446 17:35:15 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:38:14.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.446 --rc genhtml_branch_coverage=1 00:38:14.446 --rc genhtml_function_coverage=1 00:38:14.446 --rc genhtml_legend=1 00:38:14.446 --rc geninfo_all_blocks=1 00:38:14.446 --rc geninfo_unexecuted_blocks=1 00:38:14.446 00:38:14.446 ' 00:38:14.446 17:35:15 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:14.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:14.447 --rc genhtml_branch_coverage=1 00:38:14.447 --rc genhtml_function_coverage=1 00:38:14.447 --rc genhtml_legend=1 00:38:14.447 --rc geninfo_all_blocks=1 00:38:14.447 --rc geninfo_unexecuted_blocks=1 00:38:14.447 00:38:14.447 ' 00:38:14.447 17:35:15 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:38:14.706 17:35:15 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:38:14.706 17:35:15 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:38:14.706 17:35:15 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:38:14.706 17:35:15 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:14.706 17:35:15 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:38:14.706 17:35:15 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:14.706 17:35:15 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:14.706 17:35:15 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:14.706 17:35:15 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.706 17:35:15 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.706 17:35:15 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.706 17:35:15 nvme_scc -- paths/export.sh@5 -- # export PATH 00:38:14.706 17:35:15 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
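Annotation: the lt 1.15 2 call traced at the start of nvme_scc is a component-wise version comparison from scripts/common.sh. Both version strings are split on '.', '-' and ':' into arrays and compared numerically left to right. A compact sketch, reconstructed from the xtrace (the real helper also routes each component through a decimal() normalizer; error handling is omitted here):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local ver1 ver1_l ver2 ver2_l op=$2 v
        IFS='.-:' read -ra ver1 <<< "$1"; ver1_l=${#ver1[@]}
        IFS='.-:' read -ra ver2 <<< "$3"; ver2_l=${#ver2[@]}
        # Walk the longer component list; missing components count as 0.
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            if ((d1 > d2)); then [[ $op == '>' || $op == '>=' ]]; return; fi
            if ((d1 < d2)); then [[ $op == '<' || $op == '<=' ]]; return; fi
        done
        [[ $op == '==' || $op == '>=' || $op == '<=' ]]    # versions are equal
    }

    lt 1.15 2 && echo older    # decides at the first component: 1 < 2

Numeric per-component comparison is the point: it ranks 1.15 above 1.9, which plain string comparison would get backwards. Here the result (installed lcov is older than 2.x) selects the older spelling of the coverage flags exported just above.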
00:38:14.706 17:35:15 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:38:14.706 17:35:15 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:38:14.706 17:35:15 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:38:14.706 17:35:15 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:38:14.706 17:35:15 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:38:14.706 17:35:15 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:38:14.706 17:35:15 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:38:14.706 17:35:15 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:38:14.706 17:35:15 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:38:14.706 17:35:15 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:14.706 17:35:15 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:38:14.706 17:35:15 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:38:14.706 17:35:15 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:38:14.706 17:35:15 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:38:15.276 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:15.537 Waiting for block devices as requested 00:38:15.537 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:38:15.537 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:38:15.796 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:38:15.796 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:38:21.083 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:38:21.084 17:35:21 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:38:21.084 17:35:21 nvme_scc -- scripts/common.sh@18 -- # local i 00:38:21.084 17:35:21 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:38:21.084 17:35:21 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:38:21.084 17:35:21 nvme_scc -- scripts/common.sh@27 -- # return 0 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@18 -- # shift 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
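Annotation: from here to the end of the excerpt the log is one function at work. scan_nvme_ctrls walks /sys/class/nvme/nvme*, and for each controller nvme_get runs nvme id-ctrl, splits every "field : value" line on the colon, and evals the pair into a global associative array (nvme0[vid]=0x1b36, nvme0[mdts]=7, and so on, as the trace below shows at length). A minimal sketch of that parser, reconstructed from the xtrace (quoting is simplified; the real helper also copes with multi-word values such as mn and the power-state lines):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"    # declares the global array, e.g. nvme0=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue          # skip banner and blank lines
            # Trim the key, keep the value, store it: nvme0[mdts]=7
            eval "${ref}[${reg// /}]=\"${val# }\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

    nvme_get nvme0 id-ctrl /dev/nvme0
    echo "${nvme0[sn]} mdts=${nvme0[mdts]}"

Capturing the whole identify structure up front is what lets later checks test individual fields with shell expansion instead of re-invoking nvme-cli for every question.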
00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.084 17:35:21 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.084 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
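Annotation: the hex registers landing in the array are bitmasks straight out of the identify-controller structure, and shell arithmetic is enough to decode them. An illustrative decode of two fields already captured above, with bit positions taken from the NVMe base specification (the checks themselves are hypothetical examples, not part of the test suite):

    # oacs=0x12a: optional admin command support
    (( ${nvme0[oacs]} & (1 << 1) )) && echo 'Format NVM supported'             # OACS bit 1
    (( ${nvme0[oacs]} & (1 << 3) )) && echo 'Namespace Management supported'   # OACS bit 3

    # mdts=7: max transfer is 2^mdts memory pages; with 4 KiB pages
    # (MPSMIN=0 assumed) that is 128 pages, i.e. 512 KiB per command.
    echo "max transfer: $(( (1 << ${nvme0[mdts]}) * 4096 / 1024 )) KiB"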
00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:38:21.085 17:35:21 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.085 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:38:21.086 17:35:21 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:38:21.086 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@18 -- # shift 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:38:21.087 
17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
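The trace above is the suite's nvme_get helper caching every "reg : val" line that nvme-cli prints for id-ns into a global bash associative array (here ng0n1): it splits each line on the first colon with IFS, skips empty values, and evals the assignment. A minimal sketch of that pattern, assuming bash 4.2+ and an installed nvme-cli; the function name parse_id_output, the array name demo_ns, and the device path are illustrative stand-ins, not the repo's actual code:
    # Cache "reg : val" lines from an nvme-cli command into a global
    # associative array named by $1. Mirrors the IFS=: / read / eval loop
    # visible in the trace; all names here are illustrative.
    parse_id_output() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # e.g. demo_ns=()
        while IFS=: read -r reg val; do
            reg=${reg// /} val=${val# }     # drop key padding and the space after ':'
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\$val"      # e.g. demo_ns[nsze]=0x140000
        done < <("$@")
    }
    # Usage: parse_id_output demo_ns nvme id-ns /dev/ng0n1
    #        echo "${demo_ns[nsze]}"        # 0x140000 on this vagrant disk
Note that with only two variables, read leaves everything after the first colon in val, so colons inside the value survive; that is why multi-field entries such as lbaf0 keep the whole 'ms:0 lbads:9 rp:0' string exactly as the trace records them.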
00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.087 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:38:21.088 17:35:21 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.088 17:35:21 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@18 -- # shift 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:38:21.088 17:35:21 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.088 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:38:21.089 17:35:21 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.089 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:38:21.090 17:35:21 nvme_scc -- scripts/common.sh@18 -- # local i 00:38:21.090 17:35:21 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:38:21.090 17:35:21 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:38:21.090 17:35:21 nvme_scc -- scripts/common.sh@27 -- # return 0 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:38:21.090 17:35:21 
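At this point the loop has fully registered nvme0 (the ctrls, nvmes, bdfs and ordered_ctrls assignments above) and moved on to nvme1, which pci_can_use accepted at 0000:00:10.0. A rough sketch of that sysfs walk, assuming the usual /sys/class/nvme layout; the PCI_BLOCKED list and the echo are illustrative, not the exact scripts/common.sh logic:
    # Visit each nvme node in sysfs, resolve its PCI address, and skip
    # devices excluded by a block list. Illustrative sketch only.
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:10.0
        [[ " $PCI_BLOCKED " == *" $pci "* ]] && continue  # empty list accepts all
        echo "would probe ${ctrl##*/} at $pci"
    done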
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@18 -- # shift 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.090 
17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:38:21.090 
17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.090 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.091 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
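This id-ctrl walk is what the later feature gates consume: for an scc run the interesting field is ONCS (read just below as nvme1[oncs]=0x15d), where bit 8 advertises the Copy command per the NVMe spec, so this QEMU controller presumably qualifies for the Simple Copy tests. A hedged sketch of such a gate over the arrays populated above; supports_scc is an illustrative name, not the repo's actual helper:
    # True when a cached controller array advertises the Copy command
    # (ONCS bit 8). Illustrative sketch; assumes bash 4.3+ for namerefs.
    supports_scc() {
        local -n _ctrl=$1                   # e.g. the nvme1 array above
        (( ${_ctrl[oncs]:-0} & 0x100 ))     # bit 8: Copy supported
    }
    # Usage: supports_scc nvme1 && echo "nvme1 can service nvme_scc"
For reference, 0x15d also sets bits 0, 2, 3, 4 and 6, which per the NVMe spec correspond to Compare, Dataset Management, Write Zeroes, Save/Select and Timestamp support.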
00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:21.092 17:35:21 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:38:21.092 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:38:21.357 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@18 -- # shift 00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:38:21.358 17:35:21 
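The wall of xtrace above is nvme/functions.sh populating a bash associative array (nvme1) from nvme-cli output: functions.sh@16 runs the identify command (id-ctrl for controllers, id-ns for namespaces), then a read loop at @21-@23 evals one array assignment per "field : value" line. A minimal runnable sketch of that loop, reconstructed from the visible trace; the field-trimming details are an assumption, since only the IFS=: / read / eval skeleton is shown in the log:

    nvme_get() {    # usage: nvme_get <ref> <subcmd> <dev>, e.g. nvme_get nvme1 id-ctrl /dev/nvme1
        local ref=$1 reg val
        shift
        local -gA "$ref=()"              # functions.sh@20: global array named after the device
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue    # functions.sh@22: skip lines without a "field : value" pair
            reg=${reg%% *}               # assumed trimming: "sqes      " -> "sqes"
            val=${val# }                 # assumed trimming of the value's leading space
            eval "${ref}[$reg]=\"$val\"" # functions.sh@23: e.g. nvme1[sqes]="0x66"
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # functions.sh@16
    }

Decoding a few of the captured controller fields: sqes=0x66 and cqes=0x44 pack required/maximum queue-entry sizes as powers of two (64-byte submission and 16-byte completion entries), nn=256 is the controller's namespace count limit, and subnqn confirms this is the QEMU test subsystem nqn.2019-08.org.qemu:12340.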
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0
00:38:21.358 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@58 -- _ctrl_ns[${ns##*n}]=ng1n1
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@54 -- for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@55 -- [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@56 -- ns_dev=nvme1n1
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@57 -- nvme_get nvme1n1 id-ns /dev/nvme1n1
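ng1n1 reports flbas=0x7: the low nibble selects LBA format 7, matching the "(in use)" tag on lbaf7 (ms:64 lbads:12), i.e. 4096-byte data blocks carrying 64 metadata bytes each. A hypothetical helper (not part of functions.sh) showing how the two fields combine:

    lbaf_blocksize() {
        local -n ns=$1                      # nameref to one of the arrays built above
        local idx=$(( ${ns[flbas]} & 0xf )) # flbas bits 3:0 pick the active format: 0x7 -> 7
        local lbads=${ns[lbaf$idx]#*lbads:} # "ms:64 lbads:12 rp:0 (in use)" -> "12 rp:0 (in use)"
        lbads=${lbads%% *}                  # -> 12
        echo $(( 1 << lbads ))              # 2^12 = 4096-byte data blocks
    }
    declare -A ng1n1=([flbas]=0x7 [lbaf7]='ms:64 lbads:12 rp:0 (in use)')
    lbaf_blocksize ng1n1                    # prints 4096

At that block size, nsze=0x17a17a (1,548,666 blocks) works out to roughly 6.3 GB for this namespace.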
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@16 -- /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:38:21.359 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:38:21.360 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@58 -- _ctrl_ns[${ns##*n}]=nvme1n1
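nvme1n1, just parsed above, reports byte-for-byte the same id-ns data as ng1n1 because both nodes expose the same namespace: ngXnY is the generic character device and nvmeXnY the block device. Since ${ns##*n} yields 1 for both paths, the second pass simply overwrites _ctrl_ns[1] with the block node. The entries below then register the controller itself; a condensed, runnable rendition of that bookkeeping (array names exactly as in the trace; a sketch, not functions.sh itself):

    declare -A ctrls nvmes bdfs              # keyed by controller device name
    declare -a ordered_ctrls                 # controllers ordered by numeric index
    declare -A _ctrl_ns=([1]=nvme1n1)        # ns index -> node; block node overwrote ng1n1
    ctrl_dev=nvme1
    ctrls["$ctrl_dev"]=nvme1                 # functions.sh@60
    nvmes["$ctrl_dev"]=nvme1_ns              # functions.sh@61: name of this controller's ns map
    bdfs["$ctrl_dev"]=0000:00:10.0           # functions.sh@62: backing PCI address (BDF)
    ordered_ctrls[${ctrl_dev/nvme/}]=nvme1   # functions.sh@63: index 1 -> nvme1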
00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@60 -- ctrls["$ctrl_dev"]=nvme1
00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@61 -- nvmes["$ctrl_dev"]=nvme1_ns
00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@62 -- bdfs["$ctrl_dev"]=0000:00:10.0
00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@63 -- ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@47 -- for ctrl in /sys/class/nvme/nvme*
00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@48 -- [[ -e /sys/class/nvme/nvme2 ]]
00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@49 -- pci=0000:00:12.0
00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@50 -- pci_can_use 0000:00:12.0
00:38:21.361 17:35:21 nvme_scc -- scripts/common.sh@18 -- local i
00:38:21.361 17:35:21 nvme_scc -- scripts/common.sh@21 -- [[ =~ 0000:00:12.0 ]]
00:38:21.361 17:35:21 nvme_scc -- scripts/common.sh@25 -- [[ -z '' ]]
00:38:21.361 17:35:21 nvme_scc -- scripts/common.sh@27 -- return 0
00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@51 -- ctrl_dev=nvme2
00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@52 -- nvme_get nvme2 id-ctrl /dev/nvme2
00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@16 -- /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 '
00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl '
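Before adopting nvme2 above, the loop asked pci_can_use whether BDF 0000:00:12.0 may be touched (scripts/common.sh@18-27). The xtrace only shows empty operands in both tests, so the variable names in this sketch (PCI_BLOCKED, PCI_ALLOWED) are assumptions about the gate's rough shape:

    pci_can_use() {
        local i                               # scripts/common.sh@18
        [[ $PCI_BLOCKED =~ $1 ]] && return 1  # @21: BDF found in the blocked list -> refuse
        [[ -z $PCI_ALLOWED ]] && return 0     # @25: no allow-list configured -> usable
        for i in $PCI_ALLOWED; do
            [[ $i == "$1" ]] && return 0      # explicitly allow-listed
        done
        return 1
    }
    pci_can_use 0000:00:12.0 && echo usable   # both lists empty here, so: usable

Both lists are empty in this run, hence the early return 0 at @27 and the id-ctrl parse of /dev/nvme2 that continues below.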
'nvme2[fr]="8.0.0 "' 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.361 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:38:21.362 17:35:21 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
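[Note] The wctemp/cctemp values just captured are in kelvins, as the NVMe Identify Controller data structure defines them: 343 K is roughly a 70 C warning threshold and 373 K roughly a 100 C critical threshold. If a later check needed Celsius, plain shell arithmetic on the array is enough (bash evaluates the stored strings as numbers):
  echo "$(( nvme2[wctemp] - 273 ))C warning, $(( nvme2[cctemp] - 273 ))C critical"   # 70C / 100C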
00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:38:21.362 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:38:21.363 17:35:21 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:38:21.363 
17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:38:21.363 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:38:21.364 
17:35:21 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@18 -- # shift 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
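[Note] Between the controller dump and this block, functions.sh@54 switched to per-namespace enumeration: the extglob pattern @("ng${ctrl##*nvme}"|"${ctrl##*/}n")* matches both the ng2nX character nodes and the nvme2nX block nodes under /sys/class/nvme/nvme2, and @57 re-enters nvme_get with `id-ns` for each one found. The fields captured so far already fix the namespace size: nsze is 0x100000 blocks and flbas 0x4 selects LBA format 4, which the lbaf4 descriptor further down reports as lbads:12, i.e. 4096-byte blocks. For instance:
  blocks=$(( ng2n1[nsze] ))          # 0x100000 = 1048576 blocks
  echo "$(( blocks << 12 )) bytes"   # shift by lbads=12: 4294967296 B = 4 GiB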
00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.364 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.365 17:35:21 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.365 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@18 -- # shift 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:38:21.366 17:35:21 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 
17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.366 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.367 17:35:21 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:38:21.367 17:35:21 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:38:21.367 17:35:22 
nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[2]=ng2n2
00:38:21.367 17:35:22 nvme_scc -- nvme/functions.sh@54-57 -- # [per-register trace condensed] /sys/class/nvme/nvme2/ng2n3 exists; nvme_get ng2n3 id-ns /dev/ng2n3 (via /usr/local/src/nvme-cli/nvme id-ns) populates ng2n3 with:
00:38:21.368 17:35:22 nvme_scc -- #   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:38:21.368 17:35:22 nvme_scc -- #   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:38:21.632 17:35:22 nvme_scc -- #   mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:38:21.632 17:35:22 nvme_scc -- #   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:38:21.632 17:35:22 nvme_scc -- #   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:38:21.632 17:35:22 nvme_scc -- #   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:38:21.632 17:35:22 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[3]=ng2n3
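Note: the per-register trace above is nvme_get at work: functions.sh@16 runs the nvme-cli binary, @21 reads its output with IFS=:, @22 skips lines with nothing after the colon, and @23 evals each key/value pair into a global associative array named after the namespace. A minimal sketch of that mechanism, reconstructed from this trace rather than copied verbatim from SPDK's nvme/functions.sh (the whitespace trimming and argument handling here are assumptions), is:

    nvme_get() {
        # $1 names the global associative array to fill (e.g. ng2n3);
        # the remaining arguments are the command whose output is parsed
        # (e.g. /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3).
        local ref=$1 reg val
        shift
        local -gA "$ref=()"
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}        # "lbaf  4 " -> "lbaf4"
            [[ -n $val ]] || continue       # keep only "key : value" lines
            eval "${ref}[\$reg]=\${val# }"  # ng2n3[nsze]=0x100000, ...
        done < <("$@")
    }

Each [[ -n ... ]] / eval pair in the log corresponds to one iteration of this loop; note that for the lbaf rows only the first colon splits, so the value keeps its internal "ms:... lbads:... rp:..." colons intact.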
00:38:21.633 17:35:22 nvme_scc -- nvme/functions.sh@54-57 -- # [per-register trace condensed] /sys/class/nvme/nvme2/nvme2n1 exists; nvme_get nvme2n1 id-ns /dev/nvme2n1 populates nvme2n1 with values identical to ng2n3 above (same sizes, limits, and lbaf0-lbaf7 table, lbaf4 in use)
00:38:21.634 17:35:22 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme2n1
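Note: the loop header at functions.sh@54, for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*, is an extglob that picks up both the character-device (ng2n*) and block-device (nvme2n*) namespace nodes of the controller, which is why each namespace shows up in this trace twice, once per node type. The expansions behave like this (a runnable reconstruction; the pattern itself needs shopt -s extglob):

    ctrl=/sys/class/nvme/nvme2
    echo "${ctrl##*nvme}"   # -> 2      (controller instance number)
    echo "${ctrl##*/}"      # -> nvme2  (controller basename)
    # the glob therefore expands to /sys/class/nvme/nvme2/@(ng2|nvme2n)*
    # matching ng2n1, ng2n2, ng2n3 and nvme2n1, nvme2n2, nvme2n3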
00:38:21.634 17:35:22 nvme_scc -- nvme/functions.sh@54-57 -- # [per-register trace condensed] /sys/class/nvme/nvme2/nvme2n2 exists; nvme_get nvme2n2 id-ns /dev/nvme2n2 populates nvme2n2, again with the identical id-ns values
00:38:21.636 17:35:22 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[2]=nvme2n2
00:38:21.636 17:35:22 nvme_scc -- nvme/functions.sh@54-57 -- # /sys/class/nvme/nvme2/nvme2n3 exists; nvme_get nvme2n3 id-ns /dev/nvme2n3 begins
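Note, for scale (my arithmetic, not printed by the log): the in-use format lbaf4 has lbads:12, i.e. 2^12 = 4096-byte blocks, and nsze=0x100000 blocks, so each of these namespaces is 4 GiB, which the same shell can confirm:

    echo $(( 0x100000 * (1 << 12) ))   # 4294967296 bytes = 4 GiB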
17:35:22 nvme_scc -- nvme/functions.sh@22-23 -- # [per-register trace condensed] nvme2n3 reports the same id-ns values as the namespaces above, nsze=0x100000 through eui64=0000000000000000; the trace resumes at its LBA-format table: 00:38:21.637 17:35:22 nvme_scc --
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:38:21.637 17:35:22 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:38:21.637 17:35:22 nvme_scc -- scripts/common.sh@18 -- # local i 00:38:21.637 17:35:22 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:38:21.637 17:35:22 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:38:21.637 17:35:22 nvme_scc -- scripts/common.sh@27 -- # return 0 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@18 -- # shift 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:38:21.637 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:38:21.638 17:35:22 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:38:21.638 17:35:22 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.638 
17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:38:21.638 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:38:21.639 17:35:22 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 
17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:38:21.639 
17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.639 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.640 17:35:22 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:38:21.640 17:35:22 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:38:21.640 17:35:22 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:38:21.640 17:35:22 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
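
The gate now being applied to nvme3 is the same one nvme1 and nvme0 just passed in the trace above: ctrl_has_scc reads the controller's ONCS word through a bash nameref and tests bit 8, the Simple Copy Command bit. The sketch below reconstructs that check from the xtrace alone (it is not the verbatim functions.sh source); the 0x15d value is hard-coded in place of the state nvme_get built from the live devices.

#!/usr/bin/env bash
# Stand-in for the arrays nvme_get populated earlier in this log.
declare -A nvme1=([oncs]=0x15d)       # value echoed at functions.sh@76

ctrl_has_scc() {
    local ctrl=$1 oncs
    local -n _ctrl=$ctrl              # nameref, as at functions.sh@73
    oncs=${_ctrl[oncs]}
    (( oncs & 1 << 8 ))               # ONCS bit 8 = Simple Copy supported
}

ctrl_has_scc nvme1 && echo nvme1      # matches the "echo nvme1" at @199

Since 0x15d & 0x100 is nonzero, every controller in this run advertises SCC; the first name returned, nvme1 (bdf 0000:00:10.0), is the one the test picks up just below.
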
00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:38:21.641 17:35:22 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:38:21.641 17:35:22 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:38:21.641 17:35:22 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:38:21.641 17:35:22 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:38:22.580 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:23.149 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:38:23.149 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:38:23.149 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:38:23.149 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:38:23.408 17:35:23 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:38:23.408 17:35:23 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:23.408 17:35:23 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:23.408 17:35:23 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:38:23.408 ************************************ 00:38:23.408 START TEST nvme_simple_copy 00:38:23.408 ************************************ 00:38:23.408 17:35:23 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:38:23.667 Initializing NVMe Controllers 00:38:23.667 Attaching to 0000:00:10.0 00:38:23.667 Controller supports SCC. Attached to 0000:00:10.0 00:38:23.667 Namespace ID: 1 size: 6GB 00:38:23.667 Initialization complete. 
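
With initialization complete, the results that follow show what the simple_copy binary did: write LBAs 0-63 with random data, issue an NVMe Simple Copy to destination LBA 256, and verify that all 64 destination blocks match. A file-backed analogue of that pass criterion is sketched below; it uses plain dd reads and writes rather than the Simple Copy opcode the real test sends, and the 4096-byte block size is the one reported in the output beneath.

img=$(mktemp)                         # temp file stands in for the namespace
bs=4096                               # "Namespace Block Size:4096" below
# write LBAs 0-63 with random data
dd if=/dev/urandom of="$img" bs=$bs count=64 conv=notrunc status=none
# emulate copying source LBAs 0-63 to destination LBA 256
dd if="$img" of="$img" bs=$bs skip=0 seek=256 count=64 conv=notrunc status=none
# pass criterion: all 64 destination blocks match the source
cmp -s <(dd if="$img" bs=$bs count=64 status=none) \
       <(dd if="$img" bs=$bs skip=256 count=64 status=none) \
    && echo 'LBAs matching Written Data: 64'
rm -f "$img"
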
00:38:23.667 00:38:23.667 Controller QEMU NVMe Ctrl (12340 ) 00:38:23.667 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:38:23.667 Namespace Block Size:4096 00:38:23.667 Writing LBAs 0 to 63 with Random Data 00:38:23.667 Copied LBAs from 0 - 63 to the Destination LBA 256 00:38:23.667 LBAs matching Written Data: 64 00:38:23.667 00:38:23.667 real 0m0.334s 00:38:23.667 user 0m0.111s 00:38:23.667 sys 0m0.122s 00:38:23.667 17:35:24 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:23.667 17:35:24 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:38:23.667 ************************************ 00:38:23.667 END TEST nvme_simple_copy 00:38:23.667 ************************************ 00:38:23.667 00:38:23.667 real 0m9.416s 00:38:23.667 user 0m1.711s 00:38:23.667 sys 0m2.669s 00:38:23.667 17:35:24 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:23.667 17:35:24 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:38:23.667 ************************************ 00:38:23.667 END TEST nvme_scc 00:38:23.667 ************************************ 00:38:23.926 17:35:24 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:38:23.926 17:35:24 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:38:23.926 17:35:24 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:38:23.926 17:35:24 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:38:23.926 17:35:24 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:38:23.926 17:35:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:23.926 17:35:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:23.926 17:35:24 -- common/autotest_common.sh@10 -- # set +x 00:38:23.926 ************************************ 00:38:23.926 START TEST nvme_fdp 00:38:23.926 ************************************ 00:38:23.926 17:35:24 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:38:23.926 * Looking for test storage... 00:38:23.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:38:23.926 17:35:24 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:23.926 17:35:24 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:38:23.926 17:35:24 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:23.926 17:35:24 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:23.926 17:35:24 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:23.926 17:35:24 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:23.926 17:35:24 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:23.926 17:35:24 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:38:23.926 17:35:24 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:38:23.926 17:35:24 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:38:23.926 17:35:24 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:38:23.926 17:35:24 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:38:23.926 17:35:24 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:38:23.926 17:35:24 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:38:23.926 17:35:24 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:23.926 17:35:24 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:38:23.926 17:35:24 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:38:23.926 17:35:24 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:23.926 17:35:24 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:23.926 17:35:24 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:38:23.926 17:35:24 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:38:23.926 17:35:24 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:23.926 17:35:24 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:38:23.926 17:35:24 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:38:23.926 17:35:24 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:38:23.926 17:35:24 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:38:23.926 17:35:24 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:23.926 17:35:24 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:38:23.927 17:35:24 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:38:23.927 17:35:24 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:23.927 17:35:24 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:23.927 17:35:24 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:38:23.927 17:35:24 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:23.927 17:35:24 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:23.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.927 --rc genhtml_branch_coverage=1 00:38:23.927 --rc genhtml_function_coverage=1 00:38:23.927 --rc genhtml_legend=1 00:38:23.927 --rc geninfo_all_blocks=1 00:38:23.927 --rc geninfo_unexecuted_blocks=1 00:38:23.927 00:38:23.927 ' 00:38:23.927 17:35:24 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:23.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.927 --rc genhtml_branch_coverage=1 00:38:23.927 --rc genhtml_function_coverage=1 00:38:23.927 --rc genhtml_legend=1 00:38:23.927 --rc geninfo_all_blocks=1 00:38:23.927 --rc geninfo_unexecuted_blocks=1 00:38:23.927 00:38:23.927 ' 00:38:23.927 17:35:24 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:23.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.927 --rc genhtml_branch_coverage=1 00:38:23.927 --rc genhtml_function_coverage=1 00:38:23.927 --rc genhtml_legend=1 00:38:23.927 --rc geninfo_all_blocks=1 00:38:23.927 --rc geninfo_unexecuted_blocks=1 00:38:23.927 00:38:23.927 ' 00:38:23.927 17:35:24 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:23.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.927 --rc genhtml_branch_coverage=1 00:38:23.927 --rc genhtml_function_coverage=1 00:38:23.927 --rc genhtml_legend=1 00:38:23.927 --rc geninfo_all_blocks=1 00:38:23.927 --rc geninfo_unexecuted_blocks=1 00:38:23.927 00:38:23.927 ' 00:38:23.927 17:35:24 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:38:23.927 17:35:24 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:38:24.186 17:35:24 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:38:24.186 17:35:24 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:38:24.186 17:35:24 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:24.186 17:35:24 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:38:24.186 17:35:24 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:24.186 17:35:24 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:24.186 17:35:24 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:24.186 17:35:24 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:24.186 17:35:24 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:24.186 17:35:24 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:24.186 17:35:24 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:38:24.186 17:35:24 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:24.186 17:35:24 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:38:24.186 17:35:24 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:38:24.186 17:35:24 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:38:24.186 17:35:24 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:38:24.186 17:35:24 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:38:24.186 17:35:24 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:38:24.186 17:35:24 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:38:24.186 17:35:24 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:38:24.186 17:35:24 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:38:24.186 17:35:24 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:24.186 17:35:24 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:38:24.755 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:25.014 Waiting for block devices as requested 00:38:25.014 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:38:25.014 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:38:25.273 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:38:25.273 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:38:30.553 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:38:30.553 17:35:31 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:38:30.553 17:35:31 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:38:30.553 17:35:31 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:38:30.553 17:35:31 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:38:30.553 17:35:31 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:38:30.553 17:35:31 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:38:30.553 17:35:31 nvme_fdp -- scripts/common.sh@18 -- # local i 00:38:30.553 17:35:31 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:38:30.553 17:35:31 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:38:30.553 17:35:31 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:38:30.553 17:35:31 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.554 17:35:31 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:38:30.554 17:35:31 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.554 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:38:30.555 17:35:31 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:38:30.555 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.556 17:35:31 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:38:30.556 17:35:31 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:38:30.556 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.557 
17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:38:30.557 17:35:31 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.557 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:38:30.558 17:35:31 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:38:30.558 17:35:31 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.558 17:35:31 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.558 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
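The mssrl, mcl, and msrc values captured just above are the namespace's Simple Copy limits: the maximum length of a single source range, the maximum total copy length, and the (0's-based) maximum number of source ranges. They bound exactly the kind of transfer the nvme_simple_copy test exercised earlier in this log (64 LBAs copied in one range). A minimal sketch of how a script could pre-check a copy request against these limits, assuming the ng0n1 associative array has been populated by nvme_get as traced above (check_copy_limits itself is a hypothetical helper, not part of functions.sh):

# Hypothetical pre-check against the Simple Copy limits read above.
check_copy_limits() {
    local nlb_per_range=$1 nranges=$2
    (( nlb_per_range <= ng0n1[mssrl] )) || return 1      # each range <= mssrl (128)
    (( nranges <= ng0n1[msrc] + 1 )) || return 1         # msrc is 0's based: 127 -> 128 ranges
    (( nlb_per_range * nranges <= ng0n1[mcl] ))          # total <= mcl (128)
}
check_copy_limits 64 1 && echo "64-LBA single-range copy fits the limits"

The 64-LBA copy reported by nvme_simple_copy above stays well inside all three limits.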
00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:38:30.559 17:35:31 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:38:30.559 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
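Every register dump in this scan comes from the same small loop: nvme_get pipes `nvme id-ctrl`/`nvme id-ns` output through `IFS=: read -r reg val`, so each "field : value" line is split at the first colon and the pair is evaled into a bash associative array (ng0n1 above). A condensed sketch of that pattern, assuming the same nvme-cli binary and device node shown in the trace (the ns array name and the block-size derivation at the end are illustrative, not functions.sh code):

# Condensed form of the nvme_get parsing pattern traced above.
declare -A ns
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}        # "lbaf  4 " -> "lbaf4", "nsze   " -> "nsze"
    [[ -n $reg && -n $val ]] && ns[$reg]=${val# }
done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1)

# flbas=0x4 selects lbaf4 ("ms:0 lbads:12 rp:0 (in use)"); block size is 2^lbads.
fmt=$(( ns[flbas] & 0xf ))
[[ ${ns[lbaf$fmt]} =~ lbads:([0-9]+) ]] &&
    echo "block size: $(( 1 << BASH_REMATCH[1] ))"   # 4096, matching the copy test output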
00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.560 17:35:31 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:38:30.560 17:35:31 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.560 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:38:30.561 17:35:31 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:38:30.561 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:38:30.562 17:35:31 nvme_fdp -- scripts/common.sh@18 -- # local i 00:38:30.562 17:35:31 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:38:30.562 17:35:31 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:38:30.562 17:35:31 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:38:30.562 17:35:31 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.562 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.563 17:35:31 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
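At this point the trace has the id-ctrl fields for nvme1 in hand (e.g. oacs=0x12a a few entries above). Downstream checks can test such capability bitmasks straight from the populated array; the following is a small illustrative example, not part of the logged script (the OACS bit positions are from the NVMe base specification):

# oacs=0x12a sets bits 1, 3, 5 and 8: Format NVM, Namespace Management,
# Directives and Doorbell Buffer Config.
if (( nvme1[oacs] & 0x08 )); then
    echo "nvme1 supports Namespace Management"
fi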
00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:38:30.563 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.564 17:35:31 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.564 17:35:31 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.564 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.565 17:35:31 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:38:30.565 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:38:30.566 17:35:31 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:38:30.566 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:38:30.567 17:35:31 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:38:30.567 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:38:30.568 17:35:31 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:38:30.568 17:35:31 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:38:30.568 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:38:30.569 17:35:31 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:38:30.569 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.836 17:35:31 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.836 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:38:30.837 17:35:31 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:38:30.837 17:35:31 nvme_fdp -- scripts/common.sh@18 -- # local i 00:38:30.837 17:35:31 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:38:30.837 17:35:31 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:38:30.837 17:35:31 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:38:30.837 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
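
The id-ctrl dump in progress here (nvme2) was reached by the enumeration loop traced at functions.sh@47-57 above: each /sys/class/nvme/nvme* controller is gated by pci_can_use, identified with id-ctrl, and then its namespaces are globbed as both character (ngXnY) and block (nvmeXnY) nodes. A sketch of that loop's shape, with echo standing in for the traced nvme_get calls and the sysfs layout assumed to match what this log shows:

    shopt -s extglob nullglob
    for ctrl in /sys/class/nvme/nvme*; do
        ctrl_dev=${ctrl##*/}                        # e.g. nvme2
        echo "controller: $ctrl_dev"                # real script: nvme_get "$ctrl_dev" id-ctrl ...
        # namespaces appear under the controller as ngXnY and nvmeXnY
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            echo "  namespace node: ${ns##*/}"      # real script: nvme_get "${ns##*/}" id-ns ...
        done
    done

This is why each namespace earlier in the log is parsed twice, once as ng1n1 and once as nvme1n1: the extglob matches both device nodes for the same NSID.
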
00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:38:30.838 17:35:31 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.838 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:38:30.839 17:35:31 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.839 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
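
Once a controller and its namespaces are parsed, functions.sh@58-63 (visible above for nvme1) records them in a set of lookup maps: ctrls and nvmes keyed by controller device, bdfs holding the PCI address, and a per-controller <ctrl>_ns array mapping NSID to each namespace's associative array, which callers dereference via namerefs (local -n, as at functions.sh@53). An illustrative consumer of those maps follows; the sample values are copied from this log, while the printing loop itself is hypothetical:

    # data shaped like what functions.sh builds (values taken from this log)
    declare -A nvme2n1=([nsze]=0x100000 [flbas]=0x4)
    declare -A nvme2_ns=([1]=nvme2n1)
    declare -A ctrls=([nvme2]=nvme2)
    declare -A nvmes=([nvme2]=nvme2_ns)
    declare -A bdfs=([nvme2]=0000:00:12.0)

    for ctrl in "${!ctrls[@]}"; do
        declare -n ns_map=${nvmes[$ctrl]}           # e.g. nvme2_ns
        echo "$ctrl @ ${bdfs[$ctrl]}: ${#ns_map[@]} namespace(s)"
        for nsid in "${!ns_map[@]}"; do
            declare -n ns=${ns_map[$nsid]}          # e.g. the nvme2n1 array
            echo "  n$nsid: nsze=${ns[nsze]} flbas=${ns[flbas]}"
        done
    done
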
00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:38:30.840 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.841 
17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.841 17:35:31 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:38:30.841 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # 
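The records above come from nvme_get in nvme/functions.sh: it declares a global associative array (the local -gA 'ng2n1=()' at @20), then splits nvme-cli's human-readable "field : value" report on the colon and evals each pair into the array. A minimal standalone sketch of that pattern follows; the function name nvme_get_sketch and the sample input are illustrative stand-ins, not code from the SPDK tree:

#!/usr/bin/env bash
# Sketch of the nvme_get pattern traced above: read "field : value"
# lines, split on the first ':', and store each pair in a global
# associative array whose name the caller picks.
nvme_get_sketch() {
  local ref=$1 reg val
  local -gA "$ref=()"                # same declaration form seen at @20
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}         # strip the padding around the key
    val=${val# }                     # drop the single leading space
    [[ -n $reg && -n $val ]] || continue
    eval "${ref}[\$reg]=\"\$val\""   # e.g. ng2n1[nsze]=0x100000
  done
}

# Hypothetical input; the real script feeds `nvme id-ns /dev/...` here.
nvme_get_sketch demo_ns < <(printf '%s\n' 'nsze    : 0x100000' 'flbas   : 0x4')
declare -p demo_ns   # declare -A demo_ns=([nsze]="0x100000" [flbas]="0x4" )

Note the process substitution rather than a pipe: it keeps the function in the current shell, so the -g array survives the call.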
00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:38:30.842 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:38:30.843 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:38:30.843 17:35:31 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
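The @54 loop that walks these entries leans on bash extglob. With ctrl=/sys/class/nvme/nvme2, ${ctrl##*nvme} expands to 2 and ${ctrl##*/} to nvme2, so the pattern matches both the generic character-device nodes (ng2n1, ng2n2, ng2n3) and the block-device node (nvme2n1) in a single pass. A sketch of the same discovery glob; the sysfs path is copied from the trace and simply assumed to exist:

#!/usr/bin/env bash
# Sketch of the @54 namespace-discovery glob.
shopt -s extglob nullglob            # extglob for @(...); nullglob is an
                                     # added safety net for this sketch
ctrl=/sys/class/nvme/nvme2           # controller path, as in the trace
inst=${ctrl##*nvme}                  # -> 2      (text after last "nvme")
name=${ctrl##*/}                     # -> nvme2  (basename)

for ns in "$ctrl/"@("ng${inst}"|"${name}n")*; do
  echo "namespace node: ${ns##*/}"   # -> ng2n1 ng2n2 ng2n3 nvme2n1
done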
00:38:30.843 17:35:31 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:38:30.843 17:35:31 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:38:30.843 17:35:31 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:38:30.843 17:35:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:38:30.844 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:38:30.845 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:38:30.845 17:35:31 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
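One detail worth decoding from these dumps: flbas=0x4 means LBA format 4 is the one in use, and lbaf4 reports ms:0 lbads:12, i.e. no metadata and a 2^12 = 4096-byte data size. A small sketch of that decode, assuming an array populated the way the trace shows (sample values copied from the ng2n3 records above):

# Decode the in-use LBA format from captured id-ns fields.
declare -A ng2n3=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')

fmt=$(( ${ng2n3[flbas]} & 0xf ))     # FLBAS bits 3:0 select the format
lbaf=${ng2n3[lbaf$fmt]}              # -> 'ms:0 lbads:12 rp:0 (in use)'
lbads=${lbaf##*lbads:}; lbads=${lbads%% *}
echo "block size: $(( 1 << lbads )) bytes"   # -> block size: 4096 bytes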
00:38:30.845 17:35:31 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:38:30.845 17:35:31 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:38:30.845 17:35:31 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:38:30.845 17:35:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:38:30.845 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 00:38:30.845 17:35:31
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.845 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.845 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:38:30.846 
17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.846 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:38:30.847 17:35:31 nvme_fdp 
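Condensed from the functions.sh@17-@23 markers above, the nvme_get helper appears to run nvme-cli once and fold its "name : value" output into a global associative array named by its first argument. A minimal sketch, reconstructed from the trace rather than copied from functions.sh, with the whitespace trimming assumed:

shopt -s extglob
nvme_get() {
    local ref=$1 reg val                                 # functions.sh@17
    shift                                                # functions.sh@18
    local -gA "$ref=()"                                  # functions.sh@20, e.g. local -gA 'nvme2n1=()'
    while IFS=: read -r reg val; do                      # functions.sh@21
        [[ -n $val ]] || continue                        # functions.sh@22, skip lines with no value field
        reg=${reg%%+([[:space:]])}                       # assumed: strip the key's column padding
        eval "${ref}[${reg}]=\"${val##+([[:space:]])}\"" # functions.sh@23, e.g. nvme2n1[nsze]="0x100000"
    done < <(/usr/local/src/nvme-cli/nvme "$@")          # functions.sh@16
}

Called as nvme_get nvme2n1 id-ns /dev/nvme2n1; each nvme2n1[reg]=value line in the trace is one iteration of that read loop.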
00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 nvme2n2[ncap]=0x100000 nvme2n2[nuse]=0x100000 nvme2n2[nsfeat]=0x14 nvme2n2[nlbaf]=7 nvme2n2[flbas]=0x4 nvme2n2[mc]=0x3 nvme2n2[dpc]=0x1f nvme2n2[dps]=0
00:38:30.847 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 nvme2n2[rescap]=0 nvme2n2[fpi]=0 nvme2n2[dlfeat]=1 nvme2n2[nawun]=0 nvme2n2[nawupf]=0 nvme2n2[nacwu]=0 nvme2n2[nabsn]=0 nvme2n2[nabo]=0 nvme2n2[nabspf]=0 nvme2n2[noiob]=0
00:38:30.848 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 nvme2n2[npwg]=0 nvme2n2[npwa]=0 nvme2n2[npdg]=0 nvme2n2[npda]=0 nvme2n2[nows]=0 nvme2n2[mssrl]=128 nvme2n2[mcl]=128 nvme2n2[msrc]=127 nvme2n2[nulbaf]=0 nvme2n2[anagrpid]=0 nvme2n2[nsattr]=0 nvme2n2[nvmsetid]=0 nvme2n2[endgid]=0
00:38:30.848 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 nvme2n2[eui64]=0000000000000000
00:38:30.848 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:38:30.848 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:38:30.848 17:35:31 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
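The functions.sh@54-@58 markers then repeat for each namespace of the controller. Pieced together from those exact statements (the setup of ctrl and _ctrl_ns is assumed here for illustration), the per-controller walk looks like:

shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2                                   # assumed: set by the outer controller loop
declare -A _ctrl_ns=()
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do  # functions.sh@54, matches ng2* and nvme2n* entries
    [[ -e $ns ]] || continue                                 # functions.sh@55
    ns_dev=${ns##*/}                                         # functions.sh@56, e.g. nvme2n2
    nvme_get "$ns_dev" id-ns "/dev/$ns_dev"                  # functions.sh@57
    _ctrl_ns[${ns##*n}]=$ns_dev                              # functions.sh@58, keyed by namespace ID
done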
00:38:30.848 17:35:31 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:38:30.848 17:35:31 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:38:30.848 17:35:31 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:38:30.848 17:35:31 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:38:30.848 17:35:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:38:30.848 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 nvme2n3[ncap]=0x100000 nvme2n3[nuse]=0x100000 nvme2n3[nsfeat]=0x14 nvme2n3[nlbaf]=7 nvme2n3[flbas]=0x4 nvme2n3[mc]=0x3 nvme2n3[dpc]=0x1f nvme2n3[dps]=0
00:38:30.849 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 nvme2n3[rescap]=0 nvme2n3[fpi]=0 nvme2n3[dlfeat]=1 nvme2n3[nawun]=0 nvme2n3[nawupf]=0 nvme2n3[nacwu]=0 nvme2n3[nabsn]=0 nvme2n3[nabo]=0 nvme2n3[nabspf]=0 nvme2n3[noiob]=0
00:38:30.849 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 nvme2n3[npwg]=0 nvme2n3[npwa]=0 nvme2n3[npdg]=0 nvme2n3[npda]=0 nvme2n3[nows]=0 nvme2n3[mssrl]=128 nvme2n3[mcl]=128 nvme2n3[msrc]=127 nvme2n3[nulbaf]=0 nvme2n3[anagrpid]=0 nvme2n3[nsattr]=0 nvme2n3[nvmsetid]=0 nvme2n3[endgid]=0
00:38:30.849 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 nvme2n3[eui64]=0000000000000000
00:38:30.849 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:38:30.849 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:38:30.850 17:35:31 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:38:30.850 17:35:31 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:38:30.850 17:35:31 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:38:30.850 17:35:31 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:38:30.850 17:35:31 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
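One level further out, the functions.sh@47-@63 and scripts/common.sh@18-@27 markers outline the whole discovery pass: every /sys/class/nvme/nvme* controller is PCI-filtered, dumped with id-ctrl, walked namespace by namespace, and registered. A sketch under two assumptions, that pci comes from the controller's sysfs device link and that pci_can_use() is the allow/block-list check from scripts/common.sh:

declare -A ctrls=() nvmes=() bdfs=()
declare -a ordered_ctrls=()
for ctrl in /sys/class/nvme/nvme*; do                 # functions.sh@47
    [[ -e $ctrl ]] || continue                        # functions.sh@48
    pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed source of e.g. 0000:00:12.0
    pci_can_use "$pci" || continue                    # scripts/common.sh, PCI allow/block-list filter
    ctrl_dev=${ctrl##*/}                              # functions.sh@51, e.g. nvme2
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # functions.sh@52
    # ...the namespace walk from the previous sketch populates _ctrl_ns here (functions.sh@54-@58)...
    ctrls["$ctrl_dev"]=$ctrl_dev                      # functions.sh@60
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # functions.sh@61, name of that controller's ns map
    bdfs["$ctrl_dev"]=$pci                            # functions.sh@62
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # functions.sh@63, index = controller number
done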
00:38:30.850 17:35:31 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:38:30.850 17:35:31 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:38:30.850 17:35:31 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:38:30.850 17:35:31 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:38:30.850 17:35:31 nvme_fdp -- scripts/common.sh@18 -- # local i
00:38:30.850 17:35:31 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
00:38:30.850 17:35:31 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:38:30.850 17:35:31 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:38:30.850 17:35:31 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:38:30.850 17:35:31 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:38:30.850 17:35:31 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:38:30.850 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 nvme3[ssvid]=0x1af4 nvme3[sn]='12343 ' nvme3[mn]='QEMU NVMe Ctrl ' nvme3[fr]='8.0.0 ' nvme3[rab]=6 nvme3[ieee]=525400 nvme3[cmic]=0x2 nvme3[mdts]=7
00:38:30.850 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 nvme3[ver]=0x10400 nvme3[rtd3r]=0 nvme3[rtd3e]=0 nvme3[oaes]=0x100 nvme3[ctratt]=0x88010 nvme3[rrls]=0 nvme3[cntrltype]=1 nvme3[fguid]=00000000-0000-0000-0000-000000000000
00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 nvme3[crdt2]=0 nvme3[crdt3]=0 nvme3[nvmsr]=0 nvme3[vwci]=0 nvme3[mec]=0 nvme3[oacs]=0x12a nvme3[acl]=3 nvme3[aerl]=3 nvme3[frmw]=0x3
17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.851 17:35:31 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.851 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:38:30.852 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:38:30.853 17:35:31 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:38:31.113 17:35:31 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:38:31.113 17:35:31 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:38:31.113 17:35:31 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:38:31.113 17:35:31 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:38:31.113 17:35:31 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:38:31.682 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:32.620 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:38:32.620 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:38:32.620 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:38:32.620 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:38:32.620 17:35:33 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:38:32.620 17:35:33 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:32.620 17:35:33 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:32.620 17:35:33 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:38:32.620 ************************************ 00:38:32.620 START TEST nvme_flexible_data_placement 00:38:32.620 ************************************ 00:38:32.620 17:35:33 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:38:32.879 Initializing NVMe Controllers 00:38:32.879 Attaching to 0000:00:13.0 00:38:32.879 Controller supports FDP Attached to 0000:00:13.0 00:38:32.879 Namespace ID: 1 Endurance Group ID: 1 00:38:32.879 Initialization complete. 
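The controller selection traced above reduces to a single bit test: nvme3 reports ctratt=0x88010, and bit 19 of CTRATT (0x80000) is the Flexible Data Placement capability, so nvme3 is the only controller that passes the filter and its BDF 0000:00:13.0 becomes the test target. A minimal sketch of that check, reconstructed from the ctrl_has_fdp trace above (get_ctratt stands in for the register lookup the trace performs inline):

    ctrl_has_fdp() {
        local ctrl=$1 ctratt
        ctratt=$(get_ctratt "$ctrl")   # 0x88010 for nvme3, 0x8000 for nvme0/1/2
        (( ctratt & 1 << 19 ))         # CTRATT bit 19 = FDP supported
    }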
00:38:32.879 00:38:32.879 ================================== 00:38:32.879 == FDP tests for Namespace: #01 == 00:38:32.879 ================================== 00:38:32.879 00:38:32.879 Get Feature: FDP: 00:38:32.879 ================= 00:38:32.879 Enabled: Yes 00:38:32.879 FDP configuration Index: 0 00:38:32.879 00:38:32.879 FDP configurations log page 00:38:32.879 =========================== 00:38:32.879 Number of FDP configurations: 1 00:38:32.879 Version: 0 00:38:32.879 Size: 112 00:38:32.879 FDP Configuration Descriptor: 0 00:38:32.879 Descriptor Size: 96 00:38:32.879 Reclaim Group Identifier format: 2 00:38:32.879 FDP Volatile Write Cache: Not Present 00:38:32.879 FDP Configuration: Valid 00:38:32.879 Vendor Specific Size: 0 00:38:32.879 Number of Reclaim Groups: 2 00:38:32.879 Number of Reclaim Unit Handles: 8 00:38:32.879 Max Placement Identifiers: 128 00:38:32.879 Number of Namespaces Supported: 256 00:38:32.879 Reclaim Unit Nominal Size: 6000000 bytes 00:38:32.879 Estimated Reclaim Unit Time Limit: Not Reported 00:38:32.879 RUH Desc #000: RUH Type: Initially Isolated 00:38:32.879 RUH Desc #001: RUH Type: Initially Isolated 00:38:32.879 RUH Desc #002: RUH Type: Initially Isolated 00:38:32.879 RUH Desc #003: RUH Type: Initially Isolated 00:38:32.879 RUH Desc #004: RUH Type: Initially Isolated 00:38:32.879 RUH Desc #005: RUH Type: Initially Isolated 00:38:32.879 RUH Desc #006: RUH Type: Initially Isolated 00:38:32.879 RUH Desc #007: RUH Type: Initially Isolated 00:38:32.879 00:38:32.879 FDP reclaim unit handle usage log page 00:38:32.879 ====================================== 00:38:32.879 Number of Reclaim Unit Handles: 8 00:38:32.879 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:38:32.879 RUH Usage Desc #001: RUH Attributes: Unused 00:38:32.879 RUH Usage Desc #002: RUH Attributes: Unused 00:38:32.879 RUH Usage Desc #003: RUH Attributes: Unused 00:38:32.879 RUH Usage Desc #004: RUH Attributes: Unused 00:38:32.879 RUH Usage Desc #005: RUH Attributes: Unused 00:38:32.879 RUH Usage Desc #006: RUH Attributes: Unused 00:38:32.879 RUH Usage Desc #007: RUH Attributes: Unused 00:38:32.879 00:38:32.879 FDP statistics log page 00:38:32.879 ======================= 00:38:32.879 Host bytes with metadata written: 939839488 00:38:32.879 Media bytes with metadata written: 939995136 00:38:32.879 Media bytes erased: 0 00:38:32.879 00:38:32.879 FDP Reclaim unit handle status 00:38:32.879 ============================== 00:38:32.879 Number of RUHS descriptors: 2 00:38:32.879 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003fb3 00:38:32.879 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:38:32.879 00:38:32.879 FDP write on placement id: 0 success 00:38:32.879 00:38:32.879 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:38:32.879 00:38:32.879 IO mgmt send: RUH update for Placement ID: #0 Success 00:38:32.879 00:38:32.879 Get Feature: FDP Events for Placement handle: #0 00:38:32.879 ======================== 00:38:32.879 Number of FDP Events: 6 00:38:32.879 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:38:32.879 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:38:32.879 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:38:32.879 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:38:32.879 FDP Event: #4 Type: Media Reallocated Enabled: No 00:38:32.879 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:38:32.879 00:38:32.879 FDP events log page
00:38:32.879 =================== 00:38:32.879 Number of FDP events: 1 00:38:32.879 FDP Event #0: 00:38:32.879 Event Type: RU Not Written to Capacity 00:38:32.879 Placement Identifier: Valid 00:38:32.879 NSID: Valid 00:38:32.879 Location: Valid 00:38:32.879 Placement Identifier: 0 00:38:32.879 Event Timestamp: 8 00:38:32.879 Namespace Identifier: 1 00:38:32.879 Reclaim Group Identifier: 0 00:38:32.879 Reclaim Unit Handle Identifier: 0 00:38:32.879 00:38:32.879 FDP test passed 00:38:32.879 00:38:32.879 real 0m0.305s 00:38:32.879 user 0m0.091s 00:38:32.879 sys 0m0.113s 00:38:32.879 17:35:33 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:32.879 17:35:33 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:38:32.879 ************************************ 00:38:32.879 END TEST nvme_flexible_data_placement 00:38:32.879 ************************************ 00:38:33.139 ************************************ 00:38:33.139 END TEST nvme_fdp 00:38:33.139 ************************************ 00:38:33.139 00:38:33.139 real 0m9.180s 00:38:33.139 user 0m1.629s 00:38:33.139 sys 0m2.643s 00:38:33.139 17:35:33 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:33.139 17:35:33 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:38:33.139 17:35:33 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:38:33.139 17:35:33 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:38:33.139 17:35:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:33.139 17:35:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:33.139 17:35:33 -- common/autotest_common.sh@10 -- # set +x 00:38:33.139 ************************************ 00:38:33.139 START TEST nvme_rpc 00:38:33.139 ************************************ 00:38:33.139 17:35:33 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:38:33.139 * Looking for test storage... 
00:38:33.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:38:33.139 17:35:33 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:33.139 17:35:33 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:38:33.139 17:35:33 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:33.399 17:35:33 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:33.399 17:35:33 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:38:33.399 17:35:33 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:33.399 17:35:33 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:33.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:33.399 --rc genhtml_branch_coverage=1 00:38:33.399 --rc genhtml_function_coverage=1 00:38:33.399 --rc genhtml_legend=1 00:38:33.399 --rc geninfo_all_blocks=1 00:38:33.399 --rc geninfo_unexecuted_blocks=1 00:38:33.399 00:38:33.399 ' 00:38:33.399 17:35:33 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:33.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:33.399 --rc genhtml_branch_coverage=1 00:38:33.399 --rc genhtml_function_coverage=1 00:38:33.399 --rc genhtml_legend=1 00:38:33.399 --rc geninfo_all_blocks=1 00:38:33.399 --rc geninfo_unexecuted_blocks=1 00:38:33.399 00:38:33.399 ' 00:38:33.399 17:35:33 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:38:33.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:33.399 --rc genhtml_branch_coverage=1 00:38:33.399 --rc genhtml_function_coverage=1 00:38:33.399 --rc genhtml_legend=1 00:38:33.399 --rc geninfo_all_blocks=1 00:38:33.399 --rc geninfo_unexecuted_blocks=1 00:38:33.399 00:38:33.399 ' 00:38:33.399 17:35:33 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:33.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:33.399 --rc genhtml_branch_coverage=1 00:38:33.399 --rc genhtml_function_coverage=1 00:38:33.399 --rc genhtml_legend=1 00:38:33.399 --rc geninfo_all_blocks=1 00:38:33.399 --rc geninfo_unexecuted_blocks=1 00:38:33.399 00:38:33.399 ' 00:38:33.399 17:35:33 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:33.399 17:35:33 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:38:33.399 17:35:33 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:38:33.399 17:35:33 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:38:33.399 17:35:33 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:38:33.399 17:35:33 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:38:33.399 17:35:33 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:38:33.399 17:35:33 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:38:33.399 17:35:33 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:38:33.399 17:35:33 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:38:33.399 17:35:33 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:38:33.399 17:35:33 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:38:33.399 17:35:33 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:38:33.399 17:35:34 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:38:33.399 17:35:34 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:38:33.399 17:35:34 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67152 00:38:33.399 17:35:34 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:38:33.399 17:35:34 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:38:33.399 17:35:34 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67152 00:38:33.399 17:35:34 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67152 ']' 00:38:33.399 17:35:34 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:33.399 17:35:34 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:33.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:33.399 17:35:34 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:33.399 17:35:34 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:33.399 17:35:34 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:33.658 [2024-11-26 17:35:34.120696] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:38:33.658 [2024-11-26 17:35:34.120836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67152 ] 00:38:33.658 [2024-11-26 17:35:34.305863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:33.917 [2024-11-26 17:35:34.456675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:33.917 [2024-11-26 17:35:34.456728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:35.293 17:35:35 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:35.293 17:35:35 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:38:35.293 17:35:35 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:38:35.293 Nvme0n1 00:38:35.293 17:35:35 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:38:35.293 17:35:35 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:38:35.552 request: 00:38:35.552 { 00:38:35.552 "bdev_name": "Nvme0n1", 00:38:35.552 "filename": "non_existing_file", 00:38:35.552 "method": "bdev_nvme_apply_firmware", 00:38:35.552 "req_id": 1 00:38:35.552 } 00:38:35.552 Got JSON-RPC error response 00:38:35.552 response: 00:38:35.552 { 00:38:35.552 "code": -32603, 00:38:35.552 "message": "open file failed." 00:38:35.552 } 00:38:35.552 17:35:36 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:38:35.552 17:35:36 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:38:35.552 17:35:36 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:38:35.811 17:35:36 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:38:35.811 17:35:36 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67152 00:38:35.811 17:35:36 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67152 ']' 00:38:35.811 17:35:36 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67152 00:38:35.811 17:35:36 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:38:35.811 17:35:36 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:35.811 17:35:36 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67152 00:38:35.811 17:35:36 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:35.811 killing process with pid 67152 00:38:35.811 17:35:36 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:35.811 17:35:36 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67152' 00:38:35.811 17:35:36 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67152 00:38:35.811 17:35:36 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67152 00:38:38.347 00:38:38.347 real 0m5.257s 00:38:38.347 user 0m9.409s 00:38:38.347 sys 0m1.001s 00:38:38.347 17:35:38 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:38.347 ************************************ 00:38:38.347 END TEST nvme_rpc 00:38:38.347 ************************************ 00:38:38.347 17:35:38 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:38:38.347 17:35:38 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:38:38.347 17:35:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:38:38.347 17:35:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:38.347 17:35:38 -- common/autotest_common.sh@10 -- # set +x 00:38:38.347 ************************************ 00:38:38.347 START TEST nvme_rpc_timeouts 00:38:38.347 ************************************ 00:38:38.347 17:35:38 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:38:38.605 * Looking for test storage... 00:38:38.605 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:38:38.606 17:35:39 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:38.606 17:35:39 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:38:38.606 17:35:39 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:38.606 17:35:39 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:38.606 17:35:39 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:38:38.606 17:35:39 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:38.606 17:35:39 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:38.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.606 --rc genhtml_branch_coverage=1 00:38:38.606 --rc genhtml_function_coverage=1 00:38:38.606 --rc genhtml_legend=1 00:38:38.606 --rc geninfo_all_blocks=1 00:38:38.606 --rc geninfo_unexecuted_blocks=1 00:38:38.606 00:38:38.606 ' 00:38:38.606 17:35:39 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:38.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.606 --rc genhtml_branch_coverage=1 00:38:38.606 --rc genhtml_function_coverage=1 00:38:38.606 --rc genhtml_legend=1 00:38:38.606 --rc geninfo_all_blocks=1 00:38:38.606 --rc geninfo_unexecuted_blocks=1 00:38:38.606 00:38:38.606 ' 00:38:38.606 17:35:39 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:38.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.606 --rc genhtml_branch_coverage=1 00:38:38.606 --rc genhtml_function_coverage=1 00:38:38.606 --rc genhtml_legend=1 00:38:38.606 --rc geninfo_all_blocks=1 00:38:38.606 --rc geninfo_unexecuted_blocks=1 00:38:38.606 00:38:38.606 ' 00:38:38.606 17:35:39 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:38.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.606 --rc genhtml_branch_coverage=1 00:38:38.606 --rc genhtml_function_coverage=1 00:38:38.606 --rc genhtml_legend=1 00:38:38.606 --rc geninfo_all_blocks=1 00:38:38.606 --rc geninfo_unexecuted_blocks=1 00:38:38.606 00:38:38.606 ' 00:38:38.606 17:35:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:38.606 17:35:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67239 00:38:38.606 17:35:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67239 00:38:38.606 17:35:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67271 00:38:38.606 17:35:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:38:38.606 17:35:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:38:38.606 17:35:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67271 00:38:38.606 17:35:39 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67271 ']' 00:38:38.606 17:35:39 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:38.606 17:35:39 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:38.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:38.606 17:35:39 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:38.606 17:35:39 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:38.606 17:35:39 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:38:38.864 [2024-11-26 17:35:39.351067] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:38:38.864 [2024-11-26 17:35:39.351224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67271 ] 00:38:38.864 [2024-11-26 17:35:39.541322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:39.122 [2024-11-26 17:35:39.697409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:39.122 [2024-11-26 17:35:39.697455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:40.502 17:35:40 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:40.502 17:35:40 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:38:40.502 Checking default timeout settings: 00:38:40.502 17:35:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:38:40.502 17:35:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:38:40.502 Making settings changes with rpc: 00:38:40.502 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:38:40.502 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:38:40.760 Check default vs. modified settings: 00:38:40.760 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:38:40.760 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:38:41.328 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:38:41.328 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:38:41.328 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67239 00:38:41.328 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:38:41.328 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:38:41.328 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:38:41.328 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67239 00:38:41.328 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:38:41.328 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:38:41.328 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:38:41.328 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:38:41.328 Setting action_on_timeout is changed as expected. 00:38:41.328 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:38:41.328 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:38:41.328 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67239 00:38:41.328 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:38:41.328 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:38:41.328 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:38:41.328 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67239 00:38:41.328 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:38:41.329 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:38:41.329 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:38:41.329 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:38:41.329 Setting timeout_us is changed as expected. 00:38:41.329 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:38:41.329 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:38:41.329 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:38:41.329 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67239 00:38:41.329 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:38:41.329 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:38:41.329 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67239 00:38:41.329 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:38:41.329 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:38:41.329 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:38:41.329 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:38:41.329 Setting timeout_admin_us is changed as expected. 00:38:41.329 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:38:41.329 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:38:41.329 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67239 /tmp/settings_modified_67239 00:38:41.329 17:35:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67271 00:38:41.329 17:35:41 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67271 ']' 00:38:41.329 17:35:41 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67271 00:38:41.329 17:35:41 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:38:41.329 17:35:41 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:41.329 17:35:41 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67271 00:38:41.329 17:35:41 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:41.329 17:35:41 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:41.329 killing process with pid 67271 00:38:41.329 17:35:41 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67271' 00:38:41.329 17:35:41 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67271 00:38:41.329 17:35:41 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67271 00:38:44.620 RPC TIMEOUT SETTING TEST PASSED. 00:38:44.620 17:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
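The trace above is the whole nvme_rpc_timeouts flow: save the target's default config, change the NVMe timeouts over RPC, save again, diff the two snapshots field by field, then disarm the emergency trap and tear the target down. A condensed sketch of that flow as it appears in the xtrace (the _67239 suffix on the temp files is the script's own PID):

    ./scripts/rpc.py save_config > /tmp/settings_default_67239
    ./scripts/rpc.py bdev_nvme_set_options \
        --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    ./scripts/rpc.py save_config > /tmp/settings_modified_67239
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        # sed strips punctuation so JSON values like '"none",' and 'none' compare cleanly
        before=$(grep "$setting" /tmp/settings_default_67239 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_67239 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before == "$after" ]] && exit 1    # value did not change: fail the test
        echo "Setting $setting is changed as expected."
    done
    trap - SIGINT SIGTERM EXIT                 # disarm the kill -9 handler armed at startup
    rm -f /tmp/settings_default_67239 /tmp/settings_modified_67239
    killprocess "$spdk_tgt_pid"                # graceful-shutdown helper from autotest_common.sh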
00:38:44.620 00:38:44.620 real 0m5.703s 00:38:44.620 user 0m10.577s 00:38:44.620 sys 0m1.033s 00:38:44.620 17:35:44 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:44.620 17:35:44 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:38:44.620 ************************************ 00:38:44.620 END TEST nvme_rpc_timeouts 00:38:44.620 ************************************ 00:38:44.620 17:35:44 -- spdk/autotest.sh@239 -- # uname -s 00:38:44.620 17:35:44 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:38:44.620 17:35:44 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:38:44.620 17:35:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:44.620 17:35:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:44.620 17:35:44 -- common/autotest_common.sh@10 -- # set +x 00:38:44.620 ************************************ 00:38:44.620 START TEST sw_hotplug 00:38:44.620 ************************************ 00:38:44.620 17:35:44 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:38:44.620 * Looking for test storage... 00:38:44.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:38:44.620 17:35:44 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:44.620 17:35:44 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:38:44.620 17:35:44 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:44.620 17:35:44 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:44.620 17:35:44 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:44.620 17:35:44 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:44.620 17:35:44 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:44.620 17:35:44 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:38:44.620 17:35:44 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:38:44.620 17:35:44 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:38:44.620 17:35:44 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:38:44.620 17:35:44 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:38:44.620 17:35:44 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:38:44.620 17:35:44 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:38:44.620 17:35:44 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:44.620 17:35:44 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:38:44.620 17:35:44 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:38:44.620 17:35:44 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:44.620 17:35:44 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:44.620 17:35:44 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:38:44.620 17:35:44 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:38:44.620 17:35:44 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:44.620 17:35:44 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:38:44.620 17:35:44 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:38:44.620 17:35:44 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:38:44.620 17:35:44 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:38:44.620 17:35:44 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:44.620 17:35:45 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:38:44.620 17:35:45 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:38:44.620 17:35:45 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:44.620 17:35:45 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:44.620 17:35:45 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:38:44.620 17:35:45 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:44.620 17:35:45 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:44.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:44.620 --rc genhtml_branch_coverage=1 00:38:44.620 --rc genhtml_function_coverage=1 00:38:44.620 --rc genhtml_legend=1 00:38:44.620 --rc geninfo_all_blocks=1 00:38:44.620 --rc geninfo_unexecuted_blocks=1 00:38:44.620 00:38:44.620 ' 00:38:44.620 17:35:45 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:44.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:44.620 --rc genhtml_branch_coverage=1 00:38:44.620 --rc genhtml_function_coverage=1 00:38:44.620 --rc genhtml_legend=1 00:38:44.620 --rc geninfo_all_blocks=1 00:38:44.620 --rc geninfo_unexecuted_blocks=1 00:38:44.620 00:38:44.620 ' 00:38:44.620 17:35:45 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:44.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:44.620 --rc genhtml_branch_coverage=1 00:38:44.620 --rc genhtml_function_coverage=1 00:38:44.620 --rc genhtml_legend=1 00:38:44.620 --rc geninfo_all_blocks=1 00:38:44.620 --rc geninfo_unexecuted_blocks=1 00:38:44.620 00:38:44.620 ' 00:38:44.620 17:35:45 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:44.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:44.620 --rc genhtml_branch_coverage=1 00:38:44.620 --rc genhtml_function_coverage=1 00:38:44.620 --rc genhtml_legend=1 00:38:44.620 --rc geninfo_all_blocks=1 00:38:44.620 --rc geninfo_unexecuted_blocks=1 00:38:44.620 00:38:44.620 ' 00:38:44.620 17:35:45 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:38:45.187 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:45.187 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:38:45.187 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:38:45.187 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:38:45.187 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:38:45.501 17:35:45 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:38:45.501 17:35:45 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:38:45.501 17:35:45 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
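Both tests open with the same scripts/common.sh version check traced above (lt 1.15 2, gating the LCOV_OPTS export on the installed lcov): version strings are split on dots, dashes and colons and compared component-wise as integers. A reduced sketch of that logic, assuming purely numeric components (the real cmp_versions also validates each field through decimal(), omitted here):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-:          # split versions on '.', '-' and ':', as in the trace
        local op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            # Missing components compare as 0; first difference decides the result
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]     # all components equal: only >=, <= and == hold
    }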
00:38:45.501 17:35:45 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:38:45.501 17:35:45 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:38:45.501 17:35:45 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:38:45.501 17:35:45 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:38:45.501 17:35:45 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:38:45.501 17:35:45 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:38:45.501 17:35:45 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:38:45.501 17:35:45 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:38:45.501 17:35:45 sw_hotplug -- scripts/common.sh@233 -- # local class 00:38:45.501 17:35:45 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:38:45.501 17:35:45 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:38:45.501 17:35:45 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:38:45.501 17:35:45 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:38:45.501 17:35:45 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:38:45.501 17:35:45 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:38:45.501 17:35:45 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:38:45.501 17:35:45 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:38:45.501 17:35:45 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:38:45.501 17:35:45 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:38:45.501 17:35:45 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:38:45.501 17:35:45 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:38:45.501 17:35:45 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:38:45.501 17:35:45 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@18 -- # local i 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@18 -- # local i 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@18 -- # local i 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:38:45.502 17:35:45 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@18 -- # local i 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:38:45.502 17:35:45 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:38:45.502 17:35:45 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:38:45.502 17:35:45 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:38:45.502 17:35:45 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:38:46.070 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:46.329 Waiting for block devices as requested 00:38:46.329 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:38:46.587 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:38:46.587 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:38:46.587 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:38:51.854 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:38:51.854 17:35:52 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:38:51.854 17:35:52 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:38:52.423 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:38:52.423 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:52.423 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:38:52.991 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:38:53.250 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:38:53.250 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:38:53.250 17:35:53 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:38:53.250 17:35:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:38:53.510 17:35:54 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:38:53.510 17:35:54 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:38:53.510 17:35:54 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68174 00:38:53.510 17:35:54 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:38:53.510 17:35:54 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:38:53.510 17:35:54 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:38:53.510 17:35:54 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:38:53.510 17:35:54 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:38:53.510 17:35:54 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:38:53.510 17:35:54 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:38:53.510 17:35:54 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:38:53.510 17:35:54 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:38:53.510 17:35:54 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:38:53.510 17:35:54 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:38:53.510 17:35:54 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:38:53.510 17:35:54 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:38:53.510 17:35:54 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:38:53.770 Initializing NVMe Controllers 00:38:53.770 Attaching to 0000:00:10.0 00:38:53.770 Attaching to 0000:00:11.0 00:38:53.770 Attached to 0000:00:10.0 00:38:53.770 Attached to 0000:00:11.0 00:38:53.770 Initialization complete. Starting I/O... 
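The nvme_in_userspace enumeration traced above builds the ${nvmes[@]} array that the hotplug loop iterates over. The pipeline is visible nearly verbatim in the xtrace: class 01 (mass storage), subclass 08 (non-volatile memory) and prog-if 02 (NVMe) come from the printf %02x calls, and only the first nvme_count=2 controllers are kept. A sketch of the core of it:

    # List NVMe controllers: PCI class 01, subclass 08, prog-if 02 (i.e. class code 0108, -p02)
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    # -> 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 on this VM; each BDF then
    #    passes pci_can_use (PCI_ALLOWED/PCI_BLOCKED filtering) before being kept
    nvmes=("${nvmes[@]::nvme_count}")   # sw_hotplug only exercises the first two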
00:38:53.770 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:38:53.770 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:38:53.770 00:38:54.710 QEMU NVMe Ctrl (12340 ): 1456 I/Os completed (+1456) 00:38:54.710 QEMU NVMe Ctrl (12341 ): 1458 I/Os completed (+1458) 00:38:54.710 00:38:55.647 QEMU NVMe Ctrl (12340 ): 3392 I/Os completed (+1936) 00:38:55.647 QEMU NVMe Ctrl (12341 ): 3394 I/Os completed (+1936) 00:38:55.647 00:38:56.595 QEMU NVMe Ctrl (12340 ): 5424 I/Os completed (+2032) 00:38:56.595 QEMU NVMe Ctrl (12341 ): 5426 I/Os completed (+2032) 00:38:56.595 00:38:57.975 QEMU NVMe Ctrl (12340 ): 7444 I/Os completed (+2020) 00:38:57.975 QEMU NVMe Ctrl (12341 ): 7446 I/Os completed (+2020) 00:38:57.975 00:38:58.913 QEMU NVMe Ctrl (12340 ): 9436 I/Os completed (+1992) 00:38:58.913 QEMU NVMe Ctrl (12341 ): 9438 I/Os completed (+1992) 00:38:58.913 00:38:59.482 17:36:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:38:59.482 17:36:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:38:59.482 17:36:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:38:59.482 [2024-11-26 17:36:00.026260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:38:59.482 Controller removed: QEMU NVMe Ctrl (12340 ) 00:38:59.482 [2024-11-26 17:36:00.028294] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:38:59.482 [2024-11-26 17:36:00.028470] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:38:59.482 [2024-11-26 17:36:00.028541] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:38:59.482 [2024-11-26 17:36:00.028641] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:38:59.482 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:38:59.482 [2024-11-26 17:36:00.032088] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:38:59.482 [2024-11-26 17:36:00.032258] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:38:59.482 [2024-11-26 17:36:00.032317] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:38:59.482 [2024-11-26 17:36:00.032412] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:38:59.482 17:36:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:38:59.482 17:36:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:38:59.482 [2024-11-26 17:36:00.062542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:38:59.482 Controller removed: QEMU NVMe Ctrl (12341 ) 00:38:59.482 [2024-11-26 17:36:00.064346] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:38:59.482 [2024-11-26 17:36:00.064595] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:38:59.482 [2024-11-26 17:36:00.064636] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:38:59.482 [2024-11-26 17:36:00.064657] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:38:59.482 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:38:59.482 [2024-11-26 17:36:00.067443] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:38:59.482 [2024-11-26 17:36:00.067491] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:38:59.482 [2024-11-26 17:36:00.067634] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:38:59.482 [2024-11-26 17:36:00.067654] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:38:59.482 17:36:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:38:59.482 17:36:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:38:59.482 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:38:59.482 EAL: Scan for (pci) bus failed. 00:38:59.741 17:36:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:38:59.741 17:36:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:38:59.741 17:36:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:38:59.741 00:38:59.741 17:36:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:38:59.741 17:36:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:38:59.741 17:36:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:38:59.741 17:36:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:38:59.741 17:36:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:38:59.741 Attaching to 0000:00:10.0 00:38:59.741 Attached to 0000:00:10.0 00:38:59.741 17:36:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:38:59.741 17:36:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:38:59.741 17:36:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:38:59.741 Attaching to 0000:00:11.0 00:38:59.741 Attached to 0000:00:11.0 00:39:00.677 QEMU NVMe Ctrl (12340 ): 1960 I/Os completed (+1960) 00:39:00.677 QEMU NVMe Ctrl (12341 ): 1736 I/Os completed (+1736) 00:39:00.677 00:39:01.613 QEMU NVMe Ctrl (12340 ): 3781 I/Os completed (+1821) 00:39:01.613 QEMU NVMe Ctrl (12341 ): 3556 I/Os completed (+1820) 00:39:01.613 00:39:02.989 QEMU NVMe Ctrl (12340 ): 5613 I/Os completed (+1832) 00:39:02.989 QEMU NVMe Ctrl (12341 ): 5393 I/Os completed (+1837) 00:39:02.989 00:39:03.921 QEMU NVMe Ctrl (12340 ): 7390 I/Os completed (+1777) 00:39:03.921 QEMU NVMe Ctrl (12341 ): 7164 I/Os completed (+1771) 00:39:03.921 00:39:04.854 QEMU NVMe Ctrl (12340 ): 9278 I/Os completed (+1888) 00:39:04.854 QEMU NVMe Ctrl (12341 ): 9052 I/Os completed (+1888) 00:39:04.854 00:39:05.791 QEMU NVMe Ctrl (12340 ): 11062 I/Os completed (+1784) 00:39:05.791 QEMU NVMe Ctrl (12341 ): 10839 I/Os completed (+1787) 00:39:05.791 00:39:06.727 QEMU NVMe Ctrl (12340 ): 13010 I/Os completed (+1948) 00:39:06.727 QEMU NVMe Ctrl (12341 ): 12787 I/Os completed (+1948) 
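Each hotplug event traced here is driven purely through sysfs: surprise-remove the device, let SPDK notice the dead controller (the "in failed state" / "aborting outstanding command" records above), then rescan and rebind. The xtrace only shows the echo halves of these statements (sw_hotplug.sh lines @40, @56, @59-@62), so the redirection targets below are an assumption based on the standard Linux PCI sysfs interface, not read from the log:

    echo 1 > "/sys/bus/pci/devices/$bdf/remove"            # @40: surprise-remove (assumed target)
    # ... driver reports "[${bdf}, 0] in failed state" and aborts outstanding I/O ...
    echo 1 > /sys/bus/pci/rescan                           # @56: re-enumerate the bus (assumed)
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"   # @59 (assumed)
    echo "$bdf" > /sys/bus/pci/drivers_probe               # @60/@61: (re)bind the device (assumed)
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"  # @62: clear the override (assumed)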
00:39:06.727 00:39:07.681 QEMU NVMe Ctrl (12340 ): 14794 I/Os completed (+1784) 00:39:07.681 QEMU NVMe Ctrl (12341 ): 14571 I/Os completed (+1784) 00:39:07.681 00:39:08.616 QEMU NVMe Ctrl (12340 ): 16710 I/Os completed (+1916) 00:39:08.616 QEMU NVMe Ctrl (12341 ): 16487 I/Os completed (+1916) 00:39:08.616 00:39:09.995 QEMU NVMe Ctrl (12340 ): 18474 I/Os completed (+1764) 00:39:09.995 QEMU NVMe Ctrl (12341 ): 18251 I/Os completed (+1764) 00:39:09.995 00:39:10.562 QEMU NVMe Ctrl (12340 ): 20430 I/Os completed (+1956) 00:39:10.562 QEMU NVMe Ctrl (12341 ): 20207 I/Os completed (+1956) 00:39:10.562 00:39:11.939 QEMU NVMe Ctrl (12340 ): 22210 I/Os completed (+1780) 00:39:11.939 QEMU NVMe Ctrl (12341 ): 21995 I/Os completed (+1788) 00:39:11.939 00:39:11.939 17:36:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:39:11.939 17:36:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:39:11.939 17:36:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:39:11.939 17:36:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:39:11.939 [2024-11-26 17:36:12.402361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:39:11.939 Controller removed: QEMU NVMe Ctrl (12340 ) 00:39:11.939 [2024-11-26 17:36:12.407658] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:11.939 [2024-11-26 17:36:12.407873] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:11.939 [2024-11-26 17:36:12.407935] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:11.939 [2024-11-26 17:36:12.408039] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:11.939 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:39:11.939 [2024-11-26 17:36:12.411473] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:11.939 [2024-11-26 17:36:12.411647] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:11.939 [2024-11-26 17:36:12.411703] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:11.939 [2024-11-26 17:36:12.411839] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:11.939 17:36:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:39:11.939 17:36:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:39:11.939 [2024-11-26 17:36:12.443096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:39:11.939 Controller removed: QEMU NVMe Ctrl (12341 ) 00:39:11.939 [2024-11-26 17:36:12.445000] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:11.939 [2024-11-26 17:36:12.445193] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:11.939 [2024-11-26 17:36:12.445233] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:11.939 [2024-11-26 17:36:12.445255] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:11.939 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:39:11.939 [2024-11-26 17:36:12.448309] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:11.939 [2024-11-26 17:36:12.448472] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:11.939 [2024-11-26 17:36:12.448514] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:11.939 [2024-11-26 17:36:12.448537] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:11.939 17:36:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:39:11.939 17:36:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:39:11.939 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:39:11.939 EAL: Scan for (pci) bus failed. 00:39:11.940 17:36:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:39:11.940 17:36:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:39:11.940 17:36:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:39:12.198 17:36:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:39:12.198 17:36:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:39:12.198 17:36:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:39:12.198 17:36:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:39:12.198 Attaching to 0000:00:10.0 00:39:12.198 17:36:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:39:12.198 Attached to 0000:00:10.0 00:39:12.198 17:36:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:39:12.198 17:36:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:39:12.198 17:36:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:39:12.198 Attaching to 0000:00:11.0 00:39:12.198 Attached to 0000:00:11.0 00:39:12.765 QEMU NVMe Ctrl (12340 ): 1032 I/Os completed (+1032) 00:39:12.765 QEMU NVMe Ctrl (12341 ): 780 I/Os completed (+780) 00:39:12.765 00:39:13.701 QEMU NVMe Ctrl (12340 ): 2776 I/Os completed (+1744) 00:39:13.701 QEMU NVMe Ctrl (12341 ): 2524 I/Os completed (+1744) 00:39:13.701 00:39:14.635 QEMU NVMe Ctrl (12340 ): 4636 I/Os completed (+1860) 00:39:14.635 QEMU NVMe Ctrl (12341 ): 4386 I/Os completed (+1862) 00:39:14.635 00:39:15.568 QEMU NVMe Ctrl (12340 ): 6456 I/Os completed (+1820) 00:39:15.568 QEMU NVMe Ctrl (12341 ): 6206 I/Os completed (+1820) 00:39:15.568 00:39:16.627 QEMU NVMe Ctrl (12340 ): 8316 I/Os completed (+1860) 00:39:16.627 QEMU NVMe Ctrl (12341 ): 8066 I/Os completed (+1860) 00:39:16.627 00:39:17.583 QEMU NVMe Ctrl (12340 ): 10128 I/Os completed (+1812) 00:39:17.583 QEMU NVMe Ctrl (12341 ): 9878 I/Os completed (+1812) 00:39:17.583 00:39:18.960 QEMU NVMe Ctrl (12340 ): 11996 I/Os completed (+1868) 00:39:18.960 QEMU NVMe Ctrl (12341 ): 11748 I/Os completed (+1870) 00:39:18.960 00:39:19.894 
QEMU NVMe Ctrl (12340 ): 13868 I/Os completed (+1872) 00:39:19.894 QEMU NVMe Ctrl (12341 ): 13620 I/Os completed (+1872) 00:39:19.894 00:39:20.831 QEMU NVMe Ctrl (12340 ): 15768 I/Os completed (+1900) 00:39:20.831 QEMU NVMe Ctrl (12341 ): 15522 I/Os completed (+1902) 00:39:20.831 00:39:21.768 QEMU NVMe Ctrl (12340 ): 17616 I/Os completed (+1848) 00:39:21.768 QEMU NVMe Ctrl (12341 ): 17370 I/Os completed (+1848) 00:39:21.768 00:39:22.709 QEMU NVMe Ctrl (12340 ): 19395 I/Os completed (+1779) 00:39:22.709 QEMU NVMe Ctrl (12341 ): 19149 I/Os completed (+1779) 00:39:22.709 00:39:23.647 QEMU NVMe Ctrl (12340 ): 21211 I/Os completed (+1816) 00:39:23.647 QEMU NVMe Ctrl (12341 ): 20970 I/Os completed (+1821) 00:39:23.647 00:39:24.215 17:36:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:39:24.216 17:36:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:39:24.216 17:36:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:39:24.216 17:36:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:39:24.216 [2024-11-26 17:36:24.809442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:39:24.216 Controller removed: QEMU NVMe Ctrl (12340 ) 00:39:24.216 [2024-11-26 17:36:24.811592] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:24.216 [2024-11-26 17:36:24.811698] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:24.216 [2024-11-26 17:36:24.811748] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:24.216 [2024-11-26 17:36:24.811808] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:24.216 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:39:24.216 [2024-11-26 17:36:24.815141] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:24.216 [2024-11-26 17:36:24.815301] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:24.216 [2024-11-26 17:36:24.815367] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:24.216 [2024-11-26 17:36:24.815506] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:24.216 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:39:24.216 17:36:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:39:24.216 17:36:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:39:24.216 EAL: Scan for (pci) bus failed. 00:39:24.216 [2024-11-26 17:36:24.849956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:39:24.216 Controller removed: QEMU NVMe Ctrl (12341 ) 00:39:24.216 [2024-11-26 17:36:24.852049] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:24.216 [2024-11-26 17:36:24.852218] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:24.216 [2024-11-26 17:36:24.852281] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:24.216 [2024-11-26 17:36:24.852401] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:24.216 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:39:24.216 [2024-11-26 17:36:24.855425] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:24.216 [2024-11-26 17:36:24.855614] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:24.216 [2024-11-26 17:36:24.855682] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:24.216 [2024-11-26 17:36:24.855812] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:24.216 17:36:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:39:24.216 17:36:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:39:24.475 17:36:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:39:24.475 17:36:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:39:24.475 17:36:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:39:24.475 17:36:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:39:24.475 17:36:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:39:24.475 17:36:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:39:24.475 17:36:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:39:24.475 17:36:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:39:24.475 Attaching to 0000:00:10.0 00:39:24.475 Attached to 0000:00:10.0 00:39:24.734 17:36:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:39:24.734 17:36:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:39:24.734 17:36:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:39:24.734 Attaching to 0000:00:11.0 00:39:24.734 Attached to 0000:00:11.0 00:39:24.734 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:39:24.734 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:39:24.734 [2024-11-26 17:36:25.211822] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:39:36.943 17:36:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:39:36.944 17:36:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:39:36.944 17:36:37 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.18 00:39:36.944 17:36:37 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.18 00:39:36.944 17:36:37 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:39:36.944 17:36:37 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.18 00:39:36.944 17:36:37 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.18 2 00:39:36.944 remove_attach_helper took 43.18s to complete (handling 2 nvme drive(s)) 17:36:37 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:39:43.513 17:36:43 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68174 00:39:43.513 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68174) - No such process 00:39:43.513 17:36:43 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68174 00:39:43.513 17:36:43 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:39:43.513 17:36:43 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:39:43.513 17:36:43 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:39:43.513 17:36:43 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68715 00:39:43.513 17:36:43 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:43.513 17:36:43 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:39:43.513 17:36:43 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68715 00:39:43.513 17:36:43 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68715 ']' 00:39:43.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:43.513 17:36:43 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:43.513 17:36:43 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:43.513 17:36:43 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:43.513 17:36:43 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:43.513 17:36:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:39:43.513 [2024-11-26 17:36:43.349136] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:39:43.513 [2024-11-26 17:36:43.349277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68715 ] 00:39:43.513 [2024-11-26 17:36:43.540964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:43.513 [2024-11-26 17:36:43.697004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:44.450 17:36:44 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:44.450 17:36:44 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:39:44.450 17:36:44 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:39:44.450 17:36:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:44.450 17:36:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:39:44.450 17:36:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:44.450 17:36:44 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:39:44.450 17:36:44 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:39:44.450 17:36:44 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:39:44.450 17:36:44 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:39:44.450 17:36:44 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:39:44.450 17:36:44 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:39:44.450 17:36:44 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:39:44.450 17:36:44 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:39:44.450 17:36:44 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:39:44.450 17:36:44 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:39:44.450 17:36:44 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:39:44.450 17:36:44 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:39:44.450 17:36:44 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:39:51.025 17:36:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:39:51.025 17:36:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:39:51.025 17:36:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:39:51.025 17:36:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:39:51.025 17:36:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:39:51.025 17:36:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:39:51.025 17:36:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:39:51.025 17:36:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:39:51.025 17:36:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:39:51.025 17:36:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:39:51.025 17:36:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:39:51.025 17:36:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.025 17:36:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:39:51.025 [2024-11-26 17:36:50.900861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:39:51.025 [2024-11-26 17:36:50.903858] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:51.025 [2024-11-26 17:36:50.903914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:39:51.025 [2024-11-26 17:36:50.903935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:51.025 [2024-11-26 17:36:50.903970] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:51.025 [2024-11-26 17:36:50.903982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:39:51.025 [2024-11-26 17:36:50.903999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:51.025 [2024-11-26 17:36:50.904013] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:51.025 [2024-11-26 17:36:50.904028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:39:51.025 [2024-11-26 17:36:50.904041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:51.025 [2024-11-26 17:36:50.904063] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:51.025 [2024-11-26 17:36:50.904074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:39:51.025 [2024-11-26 17:36:50.904090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:51.025 17:36:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.025 17:36:50 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:39:51.025 17:36:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:39:51.025 [2024-11-26 17:36:51.300203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:39:51.025 [2024-11-26 17:36:51.303028] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:51.025 [2024-11-26 17:36:51.303075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:39:51.025 [2024-11-26 17:36:51.303114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:51.025 [2024-11-26 17:36:51.303144] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:51.025 [2024-11-26 17:36:51.303160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:39:51.025 [2024-11-26 17:36:51.303172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:51.025 [2024-11-26 17:36:51.303190] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:51.025 [2024-11-26 17:36:51.303201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:39:51.025 [2024-11-26 17:36:51.303217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:51.025 [2024-11-26 17:36:51.303230] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:39:51.025 [2024-11-26 17:36:51.303244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:39:51.025 [2024-11-26 17:36:51.303256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:51.025 17:36:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:39:51.025 17:36:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:39:51.025 17:36:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:39:51.025 17:36:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:39:51.025 17:36:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:39:51.025 17:36:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:39:51.025 17:36:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:51.025 17:36:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:39:51.025 17:36:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:51.025 17:36:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:39:51.025 17:36:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:39:51.025 17:36:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:39:51.025 17:36:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:39:51.025 17:36:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:39:51.026 17:36:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:39:51.285 17:36:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:39:51.285 17:36:51 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:39:51.285 17:36:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:39:51.285 17:36:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:39:51.285 17:36:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:39:51.285 17:36:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:39:51.285 17:36:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:40:03.500 17:37:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:40:03.500 17:37:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:40:03.500 17:37:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:40:03.500 17:37:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:03.500 17:37:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:03.500 17:37:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:03.500 17:37:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:03.500 17:37:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:03.500 17:37:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:03.500 17:37:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:40:03.500 17:37:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:40:03.500 17:37:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:40:03.500 17:37:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:40:03.500 17:37:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:40:03.500 17:37:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:40:03.500 17:37:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:40:03.500 17:37:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:40:03.500 17:37:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:40:03.500 17:37:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:03.500 17:37:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:03.500 17:37:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:03.500 17:37:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:03.500 17:37:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:03.500 [2024-11-26 17:37:03.979803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:40:03.500 [2024-11-26 17:37:03.982546] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:03.500 [2024-11-26 17:37:03.982706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:40:03.500 [2024-11-26 17:37:03.982749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.500 [2024-11-26 17:37:03.982784] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:03.500 [2024-11-26 17:37:03.982799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:40:03.500 [2024-11-26 17:37:03.982816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.500 [2024-11-26 17:37:03.982832] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:03.500 [2024-11-26 17:37:03.982847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:40:03.500 [2024-11-26 17:37:03.982860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.500 [2024-11-26 17:37:03.982878] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:03.500 [2024-11-26 17:37:03.982890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:40:03.500 [2024-11-26 17:37:03.982905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.500 17:37:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:03.500 17:37:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:40:03.500 17:37:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:40:03.759 [2024-11-26 17:37:04.379142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
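In this second pass (use_bdev=true, after enabling target-side monitoring with rpc_cmd bdev_nvme_set_hotplug -e) the test no longer trusts sysfs alone: the bdev_bdfs helper traced above asks the running SPDK target which PCI addresses still back an NVMe bdev, and the loop polls in 0.5 s steps until they disappear. A sketch of the helper and the wait, simplified from the trace (rpc_cmd is the autotest wrapper around scripts/rpc.py; the trace feeds jq via process substitution (/dev/fd/63), for which a plain pipe is equivalent, and the real loop only waits on the specific devices being removed):

    bdev_bdfs() {
        # PCI address behind every NVMe bdev the target still exposes
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done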
00:40:03.759 [2024-11-26 17:37:04.381841] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:03.759 [2024-11-26 17:37:04.382037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:40:03.759 [2024-11-26 17:37:04.382075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.759 [2024-11-26 17:37:04.382106] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:03.759 [2024-11-26 17:37:04.382123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:40:03.759 [2024-11-26 17:37:04.382137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.759 [2024-11-26 17:37:04.382156] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:03.759 [2024-11-26 17:37:04.382168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:40:03.759 [2024-11-26 17:37:04.382184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.759 [2024-11-26 17:37:04.382199] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:03.759 [2024-11-26 17:37:04.382215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:40:03.759 [2024-11-26 17:37:04.382228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:04.081 17:37:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:40:04.081 17:37:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:40:04.081 17:37:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:40:04.081 17:37:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:04.081 17:37:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:04.081 17:37:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:04.081 17:37:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:04.081 17:37:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:04.081 17:37:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:04.081 17:37:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:40:04.081 17:37:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:40:04.081 17:37:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:40:04.081 17:37:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:40:04.081 17:37:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:40:04.081 17:37:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:40:04.354 17:37:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:40:04.354 17:37:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:40:04.354 17:37:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:40:04.354 17:37:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:40:04.354 17:37:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:40:04.354 17:37:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:40:04.354 17:37:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:40:16.561 17:37:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:40:16.561 17:37:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:40:16.561 17:37:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:40:16.561 17:37:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:16.561 17:37:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:16.561 17:37:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:16.561 17:37:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:16.561 17:37:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:16.561 17:37:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:16.561 17:37:16 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:40:16.561 17:37:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:40:16.561 17:37:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:40:16.561 17:37:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:40:16.561 [2024-11-26 17:37:16.959032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:40:16.561 [2024-11-26 17:37:16.962393] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:16.561 [2024-11-26 17:37:16.962570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:40:16.561 [2024-11-26 17:37:16.962692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.561 [2024-11-26 17:37:16.962779] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:16.561 [2024-11-26 17:37:16.962814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:40:16.561 [2024-11-26 17:37:16.962939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.561 [2024-11-26 17:37:16.963004] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:16.561 [2024-11-26 17:37:16.963043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:40:16.561 [2024-11-26 17:37:16.963149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.561 [2024-11-26 17:37:16.963217] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:16.561 [2024-11-26 17:37:16.963253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:40:16.561 [2024-11-26 17:37:16.963360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.561 17:37:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:40:16.561 17:37:16 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:40:16.561 17:37:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:40:16.561 17:37:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:40:16.561 17:37:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:40:16.561 17:37:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:16.561 17:37:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:16.561 17:37:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:16.561 17:37:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:16.561 17:37:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:16.561 17:37:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:16.561 17:37:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:40:16.561 17:37:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:40:16.820 [2024-11-26 17:37:17.358418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:40:16.820 [2024-11-26 17:37:17.361539] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:16.820 [2024-11-26 17:37:17.361586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:40:16.820 [2024-11-26 17:37:17.361609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.820 [2024-11-26 17:37:17.361638] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:16.820 [2024-11-26 17:37:17.361654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:40:16.820 [2024-11-26 17:37:17.361667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.820 [2024-11-26 17:37:17.361684] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:16.820 [2024-11-26 17:37:17.361696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:40:16.820 [2024-11-26 17:37:17.361715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:16.820 [2024-11-26 17:37:17.361728] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:16.820 [2024-11-26 17:37:17.361742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:40:16.820 [2024-11-26 17:37:17.361754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.079 17:37:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:40:17.079 17:37:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:40:17.079 17:37:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:40:17.079 17:37:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:17.079 17:37:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:17.079 17:37:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:17.079 17:37:17 
sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:17.079 17:37:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:17.079 17:37:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:17.079 17:37:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:40:17.079 17:37:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:40:17.079 17:37:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:40:17.079 17:37:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:40:17.079 17:37:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:40:17.337 17:37:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:40:17.337 17:37:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:40:17.337 17:37:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:40:17.337 17:37:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:40:17.337 17:37:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:40:17.337 17:37:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:40:17.337 17:37:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:40:17.337 17:37:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:40:29.581 17:37:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:40:29.581 17:37:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:40:29.582 17:37:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:40:29.582 17:37:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:29.582 17:37:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:29.582 17:37:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:29.582 17:37:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.582 17:37:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:29.582 17:37:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.582 17:37:29 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:40:29.582 17:37:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:40:29.582 17:37:29 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.15 00:40:29.582 17:37:29 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.15 00:40:29.582 17:37:29 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:40:29.582 17:37:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.15 00:40:29.582 17:37:29 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.15 2 00:40:29.582 remove_attach_helper took 45.15s to complete (handling 2 nvme drive(s)) 17:37:29 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:40:29.582 17:37:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.582 17:37:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:29.582 17:37:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.582 17:37:29 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:40:29.582 17:37:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.582 17:37:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:29.582 17:37:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.582 17:37:29 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:40:29.582 17:37:29 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:40:29.582 17:37:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:40:29.582 17:37:30 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:40:29.582 17:37:30 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:40:29.582 17:37:30 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:40:29.582 17:37:30 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:40:29.582 17:37:30 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:40:29.582 17:37:30 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:40:29.582 17:37:30 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:40:29.582 17:37:30 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:40:29.582 17:37:30 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:40:29.582 17:37:30 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:40:36.194 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:40:36.194 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:40:36.194 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:40:36.194 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:40:36.194 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:40:36.194 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:40:36.194 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:40:36.194 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:40:36.194 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:36.194 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:36.194 [2024-11-26 17:37:36.082262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
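debug_remove_attach_helper (sw_hotplug.sh@122) funnels the helper through timing_cmd, and the autotest_common.sh@709-@722 trace shows the mechanism: TIMEFORMAT=%2R makes bash's `time` keyword report only elapsed real seconds to two decimals, which becomes the 45.15 helper_time printed earlier. A hedged sketch of that wrapper; the fd redirection that separates the time report from the helper's own output is an assumption, since xtrace shows only the `exec` and the locals:

    # Sketch of timing_cmd as suggested by the autotest_common.sh trace.
    # TIMEFORMAT=%2R, the cmd_es/time locals, and the echoed result are
    # from the log; the fd juggling that captures `time`'s stderr report
    # while passing the command's output through untouched is assumed.
    timing_cmd() {
        local cmd_es=0
        local time=0 TIMEFORMAT=%2R
        exec 3>&1 4>&2
        time=$( { time "$@" 1>&3 2>&4; } 2>&1 ) || cmd_es=$?
        exec 3>&- 4>&-
        echo "$time"        # e.g. 45.15
        return "$cmd_es"
    }

Caller side, matching sw_hotplug.sh@21-@22 (the second printf argument, 2, is presumably the NVMe drive count):

    helper_time=$(timing_cmd remove_attach_helper 3 6 true)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2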
00:40:36.194 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:36.194 17:37:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.194 [2024-11-26 17:37:36.084663] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:36.194 [2024-11-26 17:37:36.084708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:40:36.194 [2024-11-26 17:37:36.084726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:36.194 [2024-11-26 17:37:36.084753] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:36.194 [2024-11-26 17:37:36.084765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:40:36.194 [2024-11-26 17:37:36.084779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:36.194 [2024-11-26 17:37:36.084792] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:36.194 [2024-11-26 17:37:36.084806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:40:36.194 [2024-11-26 17:37:36.084817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:36.194 [2024-11-26 17:37:36.084834] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:36.195 [2024-11-26 17:37:36.084845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:40:36.195 [2024-11-26 17:37:36.084863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:36.195 17:37:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:36.195 17:37:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.195 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:40:36.195 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:40:36.195 [2024-11-26 17:37:36.481598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
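bdev_bdfs, traced at sw_hotplug.sh@12-@13 on every poll above, asks the running SPDK target which PCI functions still back an NVMe bdev. The pipeline reconstructs directly from the xtrace; the log shows jq reading a process substitution (/dev/fd/63), for which a plain pipe is equivalent, and rpc_cmd is the autotest wrapper around scripts/rpc.py:

    # Reconstruction of bdev_bdfs from the xtrace: list all bdevs over
    # RPC, pull each NVMe bdev's PCI address, and de-duplicate.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

An empty result is what lets the `(( 0 > 0 ))` checks above fall through once both controllers are gone.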
00:40:36.195 [2024-11-26 17:37:36.483275] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:36.195 [2024-11-26 17:37:36.483315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:40:36.195 [2024-11-26 17:37:36.483333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:36.195 [2024-11-26 17:37:36.483351] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:36.195 [2024-11-26 17:37:36.483365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:40:36.195 [2024-11-26 17:37:36.483387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:36.195 [2024-11-26 17:37:36.483405] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:36.195 [2024-11-26 17:37:36.483416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:40:36.195 [2024-11-26 17:37:36.483430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:36.195 [2024-11-26 17:37:36.483443] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:36.195 [2024-11-26 17:37:36.483456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:40:36.195 [2024-11-26 17:37:36.483467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:36.195 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:40:36.195 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:40:36.195 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:40:36.195 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:36.195 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:36.195 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:36.195 17:37:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:36.195 17:37:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:36.195 17:37:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:36.195 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:40:36.195 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:40:36.195 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:40:36.195 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:40:36.195 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:40:36.195 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:40:36.454 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:40:36.454 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:40:36.454 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:40:36.454 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:40:36.454 17:37:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:40:36.454 17:37:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:40:36.454 17:37:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:40:48.670 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:40:48.670 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:40:48.670 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:40:48.670 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:48.670 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:48.670 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:48.670 17:37:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.670 17:37:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:48.670 17:37:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.670 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:40:48.670 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:40:48.670 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:40:48.670 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:40:48.670 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:40:48.670 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:40:48.670 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:40:48.670 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:40:48.670 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:40:48.670 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:48.670 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:48.670 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:48.670 17:37:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.670 17:37:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:48.670 [2024-11-26 17:37:49.161197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
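Each cycle detaches the controllers with the `echo 1` at sw_hotplug.sh@40 and re-attaches them at @56-@62. xtrace records only the values echoed (1, uio_pci_generic, each BDF twice, then an empty string), not the files they are redirected into, so every sysfs path below is an assumption based on the standard Linux PCI hotplug and driver_override interfaces:

    # Hot-remove (@39-@40): detach each PCI function. Target path assumed.
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"
    done

    # Re-attach (@56-@62): rescan the bus, then steer each rediscovered
    # BDF to uio_pci_generic and clear the override. The bind/probe pair
    # is one plausible reading of the two BDF echoes in the trace.
    echo 1 > /sys/bus/pci/rescan
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > /sys/bus/pci/drivers/uio_pci_generic/bind
        echo "$dev" > /sys/bus/pci/drivers_probe
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"
    done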
00:40:48.670 [2024-11-26 17:37:49.163370] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:48.670 [2024-11-26 17:37:49.163448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:40:48.670 [2024-11-26 17:37:49.163465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:48.670 [2024-11-26 17:37:49.163491] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:48.670 [2024-11-26 17:37:49.163516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:40:48.670 [2024-11-26 17:37:49.163532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:48.670 [2024-11-26 17:37:49.163546] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:48.671 [2024-11-26 17:37:49.163562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:40:48.671 [2024-11-26 17:37:49.163574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:48.671 [2024-11-26 17:37:49.163589] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:48.671 [2024-11-26 17:37:49.163601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:40:48.671 [2024-11-26 17:37:49.163616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:48.671 17:37:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.671 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:40:48.671 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:40:48.930 [2024-11-26 17:37:49.560550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
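Taken together, the trace outlines remove_attach_helper itself: the arguments `3 6 true` bind hotplug_events=3, hotplug_wait=6 and use_bdev=true (@27-@29), @38's `(( hotplug_events-- ))` drives three full cycles, and @70-@71 confirm after each re-attach that both BDFs (0000:00:10.0 0000:00:11.0) are back. A structural skeleton under those readings; remove_devices and attach_devices are hypothetical names standing in for the sysfs sequences sketched above, and the sleep arithmetic is inferred from the traced `sleep 12`:

    # Skeleton of remove_attach_helper as implied by sw_hotplug.sh@27-@71.
    remove_attach_helper() {
        local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3   # 3 6 true
        local bdfs
        sleep "$hotplug_wait"                    # @36: settle before first cycle
        while (( hotplug_events-- )); do         # @38: three iterations
            remove_devices                       # @39-@43 (hypothetical name)
            wait_for_devices_gone                # @50-@51 poll sketched earlier
            attach_devices                       # @56-@62 (hypothetical name)
            sleep $((hotplug_wait * 2))          # @66: the traced "sleep 12"
            bdfs=($(bdev_bdfs))                  # @70
            [[ ${bdfs[*]} == "${nvmes[*]}" ]]    # @71: both BDFs are back
        done
    }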
00:40:48.930 [2024-11-26 17:37:49.562235] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:48.930 [2024-11-26 17:37:49.562272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:40:48.930 [2024-11-26 17:37:49.562291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:48.930 [2024-11-26 17:37:49.562308] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:48.930 [2024-11-26 17:37:49.562325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:40:48.930 [2024-11-26 17:37:49.562338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:48.930 [2024-11-26 17:37:49.562353] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:48.930 [2024-11-26 17:37:49.562364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:40:48.930 [2024-11-26 17:37:49.562378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:48.930 [2024-11-26 17:37:49.562391] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:40:48.930 [2024-11-26 17:37:49.562404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:40:48.930 [2024-11-26 17:37:49.562415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:49.190 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:40:49.190 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:40:49.190 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:40:49.190 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:40:49.190 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:40:49.190 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:40:49.190 17:37:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:49.190 17:37:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:40:49.190 17:37:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:49.190 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:40:49.190 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:40:49.190 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:40:49.190 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:40:49.190 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:40:49.449 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:40:49.449 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:40:49.449 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:40:49.449 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:40:49.449 17:37:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:40:49.449 17:37:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:40:49.449 17:37:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:40:49.449 17:37:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:41:01.663 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:41:01.663 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:41:01.663 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:41:01.663 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:41:01.663 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:41:01.663 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:41:01.663 17:38:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.663 17:38:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:01.663 17:38:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.663 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:41:01.663 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:41:01.663 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:41:01.663 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:41:01.663 [2024-11-26 17:38:02.140303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:41:01.663 [2024-11-26 17:38:02.143071] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:01.663 [2024-11-26 17:38:02.143239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:41:01.663 [2024-11-26 17:38:02.143361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:01.663 [2024-11-26 17:38:02.143520] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:01.663 [2024-11-26 17:38:02.143571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:41:01.663 [2024-11-26 17:38:02.143705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:01.663 [2024-11-26 17:38:02.143768] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:01.663 [2024-11-26 17:38:02.143878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:41:01.663 [2024-11-26 17:38:02.143996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:01.663 [2024-11-26 17:38:02.144103] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:01.663 [2024-11-26 17:38:02.144146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:41:01.663 [2024-11-26 17:38:02.144257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:01.664 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:41:01.664 17:38:02 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:41:01.664 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:41:01.664 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:41:01.664 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:41:01.664 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:41:01.664 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:41:01.664 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:41:01.664 17:38:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.664 17:38:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:01.664 17:38:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.664 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:41:01.664 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:41:02.234 [2024-11-26 17:38:02.639491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:41:02.234 [2024-11-26 17:38:02.641764] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:02.234 [2024-11-26 17:38:02.641903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:41:02.234 [2024-11-26 17:38:02.642071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:02.234 [2024-11-26 17:38:02.642188] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:02.234 [2024-11-26 17:38:02.642230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:41:02.234 [2024-11-26 17:38:02.642324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:02.234 [2024-11-26 17:38:02.642430] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:02.234 [2024-11-26 17:38:02.642448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:41:02.234 [2024-11-26 17:38:02.642463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:02.234 [2024-11-26 17:38:02.642477] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:02.234 [2024-11-26 17:38:02.642511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:41:02.234 [2024-11-26 17:38:02.642524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:02.234 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:41:02.234 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:41:02.234 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:41:02.234 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:41:02.234 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:41:02.234 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:41:02.234 17:38:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:02.234 17:38:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:02.234 17:38:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:02.234 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:41:02.234 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:41:02.234 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:41:02.234 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:41:02.234 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:41:02.494 17:38:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:41:02.494 17:38:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:41:02.494 17:38:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:41:02.494 17:38:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:41:02.494 17:38:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:41:02.494 17:38:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:41:02.494 17:38:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:41:02.494 17:38:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:41:14.712 17:38:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:41:14.712 17:38:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:41:14.712 17:38:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:41:14.712 17:38:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:41:14.712 17:38:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:41:14.712 17:38:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:41:14.712 17:38:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:14.712 17:38:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:14.712 17:38:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:14.712 17:38:15 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:41:14.712 17:38:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:41:14.712 17:38:15 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.17 00:41:14.712 17:38:15 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.17 00:41:14.712 17:38:15 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:41:14.712 17:38:15 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.17 00:41:14.712 17:38:15 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.17 2 00:41:14.712 remove_attach_helper took 45.17s to complete (handling 2 nvme drive(s)) 17:38:15 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:41:14.712 17:38:15 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68715 00:41:14.712 17:38:15 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68715 ']' 00:41:14.712 17:38:15 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68715 00:41:14.712 17:38:15 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:41:14.712 17:38:15 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:14.712 17:38:15 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68715 00:41:14.712 17:38:15 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:14.712 killing 
process with pid 68715 00:41:14.712 17:38:15 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:14.712 17:38:15 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68715' 00:41:14.712 17:38:15 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68715 00:41:14.712 17:38:15 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68715 00:41:17.274 17:38:17 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:41:17.534 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:41:18.103 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:41:18.103 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:41:18.103 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:41:18.103 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:41:18.362 00:41:18.362 real 2m34.143s 00:41:18.362 user 1m52.627s 00:41:18.362 sys 0m21.845s 00:41:18.362 ************************************ 00:41:18.362 END TEST sw_hotplug 00:41:18.362 ************************************ 00:41:18.362 17:38:18 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:18.362 17:38:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:18.362 17:38:18 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:41:18.362 17:38:18 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:41:18.362 17:38:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:18.362 17:38:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:18.362 17:38:18 -- common/autotest_common.sh@10 -- # set +x 00:41:18.362 ************************************ 00:41:18.362 START TEST nvme_xnvme 00:41:18.362 ************************************ 00:41:18.362 17:38:18 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:41:18.624 * Looking for test storage... 
00:41:18.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:41:18.624 17:38:19 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:18.624 17:38:19 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:41:18.624 17:38:19 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:18.624 17:38:19 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:18.624 17:38:19 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:41:18.624 17:38:19 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:18.624 17:38:19 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:18.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.624 --rc genhtml_branch_coverage=1 00:41:18.624 --rc genhtml_function_coverage=1 00:41:18.624 --rc genhtml_legend=1 00:41:18.624 --rc geninfo_all_blocks=1 00:41:18.624 --rc geninfo_unexecuted_blocks=1 00:41:18.624 00:41:18.624 ' 00:41:18.624 17:38:19 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:18.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.624 --rc genhtml_branch_coverage=1 00:41:18.624 --rc genhtml_function_coverage=1 00:41:18.624 --rc genhtml_legend=1 00:41:18.624 --rc geninfo_all_blocks=1 00:41:18.624 --rc geninfo_unexecuted_blocks=1 00:41:18.624 00:41:18.624 ' 00:41:18.624 17:38:19 
nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:18.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.624 --rc genhtml_branch_coverage=1 00:41:18.624 --rc genhtml_function_coverage=1 00:41:18.624 --rc genhtml_legend=1 00:41:18.624 --rc geninfo_all_blocks=1 00:41:18.624 --rc geninfo_unexecuted_blocks=1 00:41:18.624 00:41:18.624 ' 00:41:18.624 17:38:19 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:18.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.624 --rc genhtml_branch_coverage=1 00:41:18.624 --rc genhtml_function_coverage=1 00:41:18.624 --rc genhtml_legend=1 00:41:18.624 --rc geninfo_all_blocks=1 00:41:18.624 --rc geninfo_unexecuted_blocks=1 00:41:18.624 00:41:18.624 ' 00:41:18.624 17:38:19 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:41:18.624 17:38:19 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:41:18.624 17:38:19 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:41:18.624 17:38:19 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:41:18.624 17:38:19 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:41:18.624 17:38:19 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:41:18.624 17:38:19 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:41:18.624 17:38:19 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:41:18.624 17:38:19 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:41:18.624 17:38:19 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@20 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:41:18.624 17:38:19 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 
00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:41:18.625 17:38:19 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:41:18.625 17:38:19 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:41:18.625 17:38:19 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:41:18.625 17:38:19 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:41:18.625 17:38:19 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:41:18.625 17:38:19 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:41:18.625 17:38:19 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:41:18.625 17:38:19 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:41:18.625 17:38:19 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 
00:41:18.625 17:38:19 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:41:18.625 17:38:19 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:41:18.625 17:38:19 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:41:18.625 17:38:19 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:41:18.625 17:38:19 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:41:18.625 17:38:19 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:41:18.625 17:38:19 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:41:18.625 17:38:19 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:41:18.625 #define SPDK_CONFIG_H 00:41:18.625 #define SPDK_CONFIG_AIO_FSDEV 1 00:41:18.625 #define SPDK_CONFIG_APPS 1 00:41:18.625 #define SPDK_CONFIG_ARCH native 00:41:18.625 #define SPDK_CONFIG_ASAN 1 00:41:18.625 #undef SPDK_CONFIG_AVAHI 00:41:18.625 #undef SPDK_CONFIG_CET 00:41:18.625 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:41:18.625 #define SPDK_CONFIG_COVERAGE 1 00:41:18.625 #define SPDK_CONFIG_CROSS_PREFIX 00:41:18.625 #undef SPDK_CONFIG_CRYPTO 00:41:18.625 #undef SPDK_CONFIG_CRYPTO_MLX5 00:41:18.625 #undef SPDK_CONFIG_CUSTOMOCF 00:41:18.625 #undef SPDK_CONFIG_DAOS 00:41:18.625 #define SPDK_CONFIG_DAOS_DIR 00:41:18.625 #define SPDK_CONFIG_DEBUG 1 00:41:18.625 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:41:18.625 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:41:18.625 #define SPDK_CONFIG_DPDK_INC_DIR 00:41:18.625 #define SPDK_CONFIG_DPDK_LIB_DIR 00:41:18.625 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:41:18.625 #undef SPDK_CONFIG_DPDK_UADK 00:41:18.625 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:41:18.625 #define SPDK_CONFIG_EXAMPLES 1 00:41:18.625 #undef SPDK_CONFIG_FC 00:41:18.625 #define SPDK_CONFIG_FC_PATH 00:41:18.625 #define SPDK_CONFIG_FIO_PLUGIN 1 00:41:18.625 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:41:18.625 #define SPDK_CONFIG_FSDEV 1 00:41:18.625 #undef SPDK_CONFIG_FUSE 00:41:18.625 #undef SPDK_CONFIG_FUZZER 00:41:18.625 #define SPDK_CONFIG_FUZZER_LIB 00:41:18.625 #undef SPDK_CONFIG_GOLANG 00:41:18.625 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:41:18.625 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:41:18.625 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:41:18.625 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:41:18.625 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:41:18.625 #undef SPDK_CONFIG_HAVE_LIBBSD 00:41:18.625 #undef SPDK_CONFIG_HAVE_LZ4 00:41:18.625 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:41:18.625 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:41:18.625 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:41:18.625 #define SPDK_CONFIG_IDXD 1 00:41:18.625 #define SPDK_CONFIG_IDXD_KERNEL 1 00:41:18.625 #undef SPDK_CONFIG_IPSEC_MB 00:41:18.625 #define SPDK_CONFIG_IPSEC_MB_DIR 00:41:18.625 #define SPDK_CONFIG_ISAL 1 00:41:18.625 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:41:18.625 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:41:18.625 #define SPDK_CONFIG_LIBDIR 00:41:18.625 #undef SPDK_CONFIG_LTO 00:41:18.625 #define SPDK_CONFIG_MAX_LCORES 128 00:41:18.625 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:41:18.625 #define SPDK_CONFIG_NVME_CUSE 1 00:41:18.625 #undef SPDK_CONFIG_OCF 00:41:18.625 #define SPDK_CONFIG_OCF_PATH 00:41:18.625 #define SPDK_CONFIG_OPENSSL_PATH 00:41:18.625 #undef SPDK_CONFIG_PGO_CAPTURE 00:41:18.625 
#define SPDK_CONFIG_PGO_DIR 00:41:18.625 #undef SPDK_CONFIG_PGO_USE 00:41:18.625 #define SPDK_CONFIG_PREFIX /usr/local 00:41:18.625 #undef SPDK_CONFIG_RAID5F 00:41:18.625 #undef SPDK_CONFIG_RBD 00:41:18.625 #define SPDK_CONFIG_RDMA 1 00:41:18.625 #define SPDK_CONFIG_RDMA_PROV verbs 00:41:18.625 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:41:18.625 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:41:18.625 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:41:18.625 #define SPDK_CONFIG_SHARED 1 00:41:18.625 #undef SPDK_CONFIG_SMA 00:41:18.625 #define SPDK_CONFIG_TESTS 1 00:41:18.625 #undef SPDK_CONFIG_TSAN 00:41:18.625 #define SPDK_CONFIG_UBLK 1 00:41:18.625 #define SPDK_CONFIG_UBSAN 1 00:41:18.625 #undef SPDK_CONFIG_UNIT_TESTS 00:41:18.625 #undef SPDK_CONFIG_URING 00:41:18.625 #define SPDK_CONFIG_URING_PATH 00:41:18.625 #undef SPDK_CONFIG_URING_ZNS 00:41:18.625 #undef SPDK_CONFIG_USDT 00:41:18.625 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:41:18.625 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:41:18.625 #undef SPDK_CONFIG_VFIO_USER 00:41:18.625 #define SPDK_CONFIG_VFIO_USER_DIR 00:41:18.625 #define SPDK_CONFIG_VHOST 1 00:41:18.625 #define SPDK_CONFIG_VIRTIO 1 00:41:18.625 #undef SPDK_CONFIG_VTUNE 00:41:18.625 #define SPDK_CONFIG_VTUNE_DIR 00:41:18.625 #define SPDK_CONFIG_WERROR 1 00:41:18.625 #define SPDK_CONFIG_WPDK_DIR 00:41:18.625 #define SPDK_CONFIG_XNVME 1 00:41:18.625 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:41:18.626 17:38:19 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:18.626 17:38:19 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:41:18.626 17:38:19 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:18.626 17:38:19 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:18.626 17:38:19 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:18.626 17:38:19 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.626 17:38:19 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.626 17:38:19 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.626 17:38:19 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:41:18.626 17:38:19 nvme_xnvme -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:41:18.626 17:38:19 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:41:18.626 17:38:19 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:41:18.626 17:38:19 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:41:18.626 17:38:19 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:41:18.626 17:38:19 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:41:18.626 17:38:19 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:41:18.626 17:38:19 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:41:18.626 17:38:19 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:41:18.626 17:38:19 nvme_xnvme -- pm/common@68 -- # uname -s 00:41:18.626 17:38:19 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:41:18.626 17:38:19 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:41:18.626 17:38:19 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:41:18.626 17:38:19 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:41:18.626 17:38:19 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:41:18.626 17:38:19 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:41:18.626 17:38:19 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:41:18.626 17:38:19 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:41:18.626 17:38:19 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:41:18.626 17:38:19 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:41:18.626 17:38:19 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:41:18.626 17:38:19 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:41:18.626 17:38:19 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:41:18.626 17:38:19 nvme_xnvme -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:41:18.626 17:38:19 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:41:18.887 17:38:19 
nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@142 -- 
# : true 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:41:18.887 
17:38:19 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:41:18.887 17:38:19 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:41:18.887 
17:38:19 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 
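The sanitizer setup traced in the last several entries boils down to rebuilding a leak-suppression file and exporting strict ASAN/UBSAN/LSAN options. A condensed sketch, reusing the exact values that appear in the trace:

    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"                             # start from a clean suppression file
    echo "leak:libfuse3.so" >> "$supp"         # known benign leak to ignore
    export LSAN_OPTIONS="suppressions=$supp"
    export ASAN_OPTIONS="new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0"
    export UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134"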
00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70064 ]] 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70064 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.E4U9aJ 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.E4U9aJ/tests/xnvme /tmp/spdk.E4U9aJ 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13966790656 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5601423360 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:41:18.888 
17:38:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261661696 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13966790656 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5601423360 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266281984 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:41:18.888 17:38:19 
nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt/output 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=97198522368 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=2504257536 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:41:18.888 * Looking for test storage... 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:41:18.888 17:38:19 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13966790656 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:41:18.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:41:18.889 17:38:19 nvme_xnvme -- 
common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:18.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.889 --rc genhtml_branch_coverage=1 00:41:18.889 --rc genhtml_function_coverage=1 00:41:18.889 --rc genhtml_legend=1 00:41:18.889 --rc geninfo_all_blocks=1 00:41:18.889 --rc geninfo_unexecuted_blocks=1 00:41:18.889 00:41:18.889 ' 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:18.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.889 --rc genhtml_branch_coverage=1 00:41:18.889 --rc genhtml_function_coverage=1 00:41:18.889 --rc genhtml_legend=1 00:41:18.889 --rc geninfo_all_blocks=1 00:41:18.889 --rc geninfo_unexecuted_blocks=1 00:41:18.889 00:41:18.889 ' 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:18.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.889 --rc genhtml_branch_coverage=1 00:41:18.889 --rc genhtml_function_coverage=1 00:41:18.889 --rc genhtml_legend=1 00:41:18.889 --rc geninfo_all_blocks=1 00:41:18.889 --rc geninfo_unexecuted_blocks=1 00:41:18.889 00:41:18.889 ' 00:41:18.889 17:38:19 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:18.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:18.889 --rc genhtml_branch_coverage=1 00:41:18.889 --rc genhtml_function_coverage=1 00:41:18.889 --rc genhtml_legend=1 00:41:18.889 --rc geninfo_all_blocks=1 00:41:18.889 --rc geninfo_unexecuted_blocks=1 00:41:18.889 00:41:18.889 ' 00:41:18.889 17:38:19 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:18.889 17:38:19 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:18.889 17:38:19 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.889 17:38:19 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.889 17:38:19 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.889 17:38:19 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:41:18.889 17:38:19 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:18.889 17:38:19 nvme_xnvme -- xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:41:18.889 17:38:19 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:41:18.889 17:38:19 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:41:18.889 17:38:19 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:41:18.889 17:38:19 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:41:18.889 17:38:19 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:41:18.889 17:38:19 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:41:18.889 17:38:19 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:41:18.889 17:38:19 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:41:18.889 17:38:19 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:41:18.889 17:38:19 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:41:18.889 17:38:19 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:41:18.889 17:38:19 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:41:18.889 17:38:19 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:41:18.889 17:38:19 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:41:18.889 
17:38:19 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:41:18.889 17:38:19 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:41:18.889 17:38:19 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:41:18.889 17:38:19 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:41:18.889 17:38:19 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:41:18.889 17:38:19 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:41:18.889 17:38:19 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:41:19.458 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:41:19.718 Waiting for block devices as requested 00:41:19.977 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:41:19.977 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:41:19.977 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:41:20.235 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:41:25.508 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:41:25.508 17:38:25 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:41:25.768 17:38:26 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:41:25.768 17:38:26 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:41:26.027 17:38:26 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:41:26.027 17:38:26 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:41:26.027 17:38:26 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:41:26.027 17:38:26 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:41:26.027 17:38:26 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:41:26.027 No valid GPT data, bailing 00:41:26.027 17:38:26 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:41:26.027 17:38:26 nvme_xnvme -- scripts/common.sh@394 -- # pt= 00:41:26.027 17:38:26 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:41:26.027 17:38:26 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:41:26.027 17:38:26 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:41:26.027 17:38:26 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:41:26.027 17:38:26 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:41:26.027 17:38:26 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:41:26.027 17:38:26 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:41:26.027 17:38:26 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:41:26.027 17:38:26 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:41:26.027 17:38:26 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:41:26.027 17:38:26 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:41:26.027 17:38:26 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:41:26.027 17:38:26 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:41:26.027 17:38:26 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:41:26.027 17:38:26 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:41:26.027 17:38:26 
nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:26.027 17:38:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:26.027 17:38:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:41:26.027 ************************************ 00:41:26.027 START TEST xnvme_rpc 00:41:26.027 ************************************ 00:41:26.027 17:38:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:41:26.027 17:38:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:41:26.027 17:38:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:41:26.027 17:38:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:41:26.027 17:38:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:41:26.027 17:38:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70465 00:41:26.027 17:38:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:26.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:26.027 17:38:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70465 00:41:26.027 17:38:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70465 ']' 00:41:26.027 17:38:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:26.027 17:38:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:26.027 17:38:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:26.027 17:38:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:26.027 17:38:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:26.286 [2024-11-26 17:38:26.786718] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:41:26.286 [2024-11-26 17:38:26.787077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70465 ] 00:41:26.286 [2024-11-26 17:38:26.969183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:26.545 [2024-11-26 17:38:27.074249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:27.484 17:38:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:27.484 17:38:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:41:27.484 17:38:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:41:27.484 17:38:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.484 17:38:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:27.484 xnvme_bdev 00:41:27.484 17:38:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.484 17:38:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:41:27.484 17:38:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:41:27.484 17:38:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:41:27.484 17:38:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.484 17:38:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:27.484 17:38:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.484 17:38:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:41:27.484 17:38:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:41:27.484 17:38:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:41:27.484 17:38:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:41:27.484 17:38:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.484 17:38:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70465 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70465 ']' 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70465 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70465 00:41:27.484 killing process with pid 70465 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70465' 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70465 00:41:27.484 17:38:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70465 00:41:30.021 00:41:30.021 real 0m3.740s 00:41:30.021 user 0m3.758s 00:41:30.021 sys 0m0.579s 00:41:30.021 17:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:30.021 17:38:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:30.021 ************************************ 00:41:30.021 END TEST xnvme_rpc 00:41:30.021 ************************************ 00:41:30.021 17:38:30 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:41:30.021 17:38:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:30.021 17:38:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:30.021 17:38:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:41:30.021 ************************************ 00:41:30.021 START TEST xnvme_bdevperf 00:41:30.021 ************************************ 00:41:30.021 17:38:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:41:30.021 17:38:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:41:30.021 17:38:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:41:30.021 17:38:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:41:30.021 17:38:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:41:30.021 17:38:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
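The xnvme_rpc test that just ended above is a thin wrapper around three JSON-RPC calls against a running spdk_tgt: create the xnvme bdev over the raw namespace, read the config back, delete it. A hand-run equivalent with scripts/rpc.py might look like the sketch below; the socket path, arguments, and jq filter are taken from the trace, while running it from the SPDK repo root is an assumption:

    # Create, inspect, and delete the xnvme bdev, as the test above does.
    # /var/tmp/spdk.sock is the DEFAULT_RPC_ADDR exported earlier in this log.
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_xnvme_create \
            /dev/nvme0n1 xnvme_bdev libaio     # filename, bdev name, io_mechanism
    scripts/rpc.py -s /var/tmp/spdk.sock framework_get_config bdev |
            jq -r '.[] | select(.method == "bdev_xnvme_create").params'
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_xnvme_delete xnvme_bdev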
00:41:30.021 17:38:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:41:30.021 17:38:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:41:30.021 { 00:41:30.021 "subsystems": [ 00:41:30.021 { 00:41:30.021 "subsystem": "bdev", 00:41:30.021 "config": [ 00:41:30.021 { 00:41:30.021 "params": { 00:41:30.021 "io_mechanism": "libaio", 00:41:30.021 "conserve_cpu": false, 00:41:30.021 "filename": "/dev/nvme0n1", 00:41:30.021 "name": "xnvme_bdev" 00:41:30.021 }, 00:41:30.021 "method": "bdev_xnvme_create" 00:41:30.021 }, 00:41:30.021 { 00:41:30.021 "method": "bdev_wait_for_examine" 00:41:30.021 } 00:41:30.021 ] 00:41:30.021 } 00:41:30.021 ] 00:41:30.021 } 00:41:30.021 [2024-11-26 17:38:30.589687] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:41:30.021 [2024-11-26 17:38:30.590149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70543 ] 00:41:30.280 [2024-11-26 17:38:30.778877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:30.280 [2024-11-26 17:38:30.886307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:30.848 Running I/O for 5 seconds... 00:41:32.723 31453.00 IOPS, 122.86 MiB/s [2024-11-26T17:38:34.400Z] 31421.50 IOPS, 122.74 MiB/s [2024-11-26T17:38:35.337Z] 31041.33 IOPS, 121.26 MiB/s [2024-11-26T17:38:36.275Z] 31126.25 IOPS, 121.59 MiB/s 00:41:35.581 Latency(us) 00:41:35.581 [2024-11-26T17:38:36.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:35.581 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:41:35.581 xnvme_bdev : 5.00 31315.80 122.33 0.00 0.00 2039.74 174.37 4000.59 00:41:35.581 [2024-11-26T17:38:36.275Z] =================================================================================================================== 00:41:35.581 [2024-11-26T17:38:36.275Z] Total : 31315.80 122.33 0.00 0.00 2039.74 174.37 4000.59 00:41:36.958 17:38:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:41:36.958 17:38:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:41:36.958 17:38:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:41:36.958 17:38:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:41:36.958 17:38:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:41:36.958 { 00:41:36.958 "subsystems": [ 00:41:36.958 { 00:41:36.958 "subsystem": "bdev", 00:41:36.958 "config": [ 00:41:36.958 { 00:41:36.958 "params": { 00:41:36.958 "io_mechanism": "libaio", 00:41:36.958 "conserve_cpu": false, 00:41:36.958 "filename": "/dev/nvme0n1", 00:41:36.958 "name": "xnvme_bdev" 00:41:36.958 }, 00:41:36.958 "method": "bdev_xnvme_create" 00:41:36.958 }, 00:41:36.958 { 00:41:36.958 "method": "bdev_wait_for_examine" 00:41:36.958 } 00:41:36.958 ] 00:41:36.958 } 00:41:36.958 ] 00:41:36.958 } 00:41:36.958 [2024-11-26 17:38:37.424169] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
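The JSON blob gen_conf emitted above is the entire bdevperf configuration: one bdev_xnvme_create call plus bdev_wait_for_examine. A sketch of an equivalent stand-alone invocation, with the command-line flags exactly as traced and a temp file standing in for the /dev/fd/62 plumbing (run from the SPDK repo root is an assumption):

    conf=$(mktemp)
    cat > "$conf" <<'EOF'
    {"subsystems": [{"subsystem": "bdev", "config": [
      {"method": "bdev_xnvme_create",
       "params": {"io_mechanism": "libaio", "conserve_cpu": false,
                  "filename": "/dev/nvme0n1", "name": "xnvme_bdev"}},
      {"method": "bdev_wait_for_examine"}]}]}
    EOF
    # -q queue depth, -w workload, -t seconds, -T target bdev, -o IO size (bytes)
    ./build/examples/bdevperf --json "$conf" -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
    rm -f "$conf"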
00:41:36.958 [2024-11-26 17:38:37.424301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70625 ] 00:41:36.958 [2024-11-26 17:38:37.591431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:37.216 [2024-11-26 17:38:37.698188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:37.474 Running I/O for 5 seconds... 00:41:39.782 35103.00 IOPS, 137.12 MiB/s [2024-11-26T17:38:41.412Z] 39794.00 IOPS, 155.45 MiB/s [2024-11-26T17:38:42.348Z] 42425.33 IOPS, 165.72 MiB/s [2024-11-26T17:38:43.285Z] 43510.50 IOPS, 169.96 MiB/s 00:41:42.591 Latency(us) 00:41:42.591 [2024-11-26T17:38:43.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:42.591 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:41:42.591 xnvme_bdev : 5.00 44054.37 172.09 0.00 0.00 1449.80 158.74 7053.67 00:41:42.591 [2024-11-26T17:38:43.285Z] =================================================================================================================== 00:41:42.591 [2024-11-26T17:38:43.285Z] Total : 44054.37 172.09 0.00 0.00 1449.80 158.74 7053.67 00:41:43.527 00:41:43.527 real 0m13.676s 00:41:43.527 user 0m4.639s 00:41:43.527 sys 0m6.600s 00:41:43.527 17:38:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:43.527 17:38:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:41:43.527 ************************************ 00:41:43.527 END TEST xnvme_bdevperf 00:41:43.527 ************************************ 00:41:43.527 17:38:44 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:41:43.527 17:38:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:43.527 17:38:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:43.527 17:38:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:41:43.787 ************************************ 00:41:43.787 START TEST xnvme_fio_plugin 00:41:43.787 ************************************ 00:41:43.787 17:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:41:43.787 17:38:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:41:43.787 17:38:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:41:43.787 17:38:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:41:43.787 17:38:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:41:43.787 17:38:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:41:43.787 17:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:41:43.787 17:38:44 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:41:43.787 17:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:41:43.787 17:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:41:43.787 17:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:43.787 17:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:43.787 17:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:41:43.787 17:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:41:43.787 17:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:43.787 17:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:43.787 17:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:41:43.787 17:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:41:43.787 17:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:43.787 17:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:41:43.787 17:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:41:43.787 17:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:41:43.787 17:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:41:43.787 17:38:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:41:43.787 { 00:41:43.787 "subsystems": [ 00:41:43.787 { 00:41:43.787 "subsystem": "bdev", 00:41:43.787 "config": [ 00:41:43.787 { 00:41:43.787 "params": { 00:41:43.787 "io_mechanism": "libaio", 00:41:43.787 "conserve_cpu": false, 00:41:43.787 "filename": "/dev/nvme0n1", 00:41:43.787 "name": "xnvme_bdev" 00:41:43.787 }, 00:41:43.787 "method": "bdev_xnvme_create" 00:41:43.787 }, 00:41:43.787 { 00:41:43.787 "method": "bdev_wait_for_examine" 00:41:43.787 } 00:41:43.787 ] 00:41:43.787 } 00:41:43.787 ] 00:41:43.787 } 00:41:44.047 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:41:44.047 fio-3.35 00:41:44.047 Starting 1 thread 00:41:50.612 00:41:50.612 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70750: Tue Nov 26 17:38:50 2024 00:41:50.612 read: IOPS=41.8k, BW=163MiB/s (171MB/s)(816MiB/5001msec) 00:41:50.612 slat (usec): min=4, max=1040, avg=20.88, stdev=29.36 00:41:50.612 clat (usec): min=63, max=5984, avg=898.98, stdev=573.35 00:41:50.612 lat (usec): min=141, max=6085, avg=919.86, stdev=577.96 00:41:50.612 clat percentiles (usec): 00:41:50.612 | 1.00th=[ 188], 5.00th=[ 265], 10.00th=[ 338], 20.00th=[ 465], 00:41:50.612 | 30.00th=[ 586], 40.00th=[ 701], 50.00th=[ 816], 60.00th=[ 930], 00:41:50.612 | 70.00th=[ 1057], 80.00th=[ 1188], 90.00th=[ 1418], 95.00th=[ 1795], 00:41:50.612 | 99.00th=[ 3359], 99.50th=[ 3916], 99.90th=[ 4686], 99.95th=[ 4948], 00:41:50.612 | 99.99th=[ 5211] 00:41:50.612 bw ( KiB/s): min=155416, max=177776, per=100.00%, avg=168387.00, stdev=8252.39, samples=9 
00:41:50.612 iops : min=38854, max=44444, avg=42096.67, stdev=2063.12, samples=9 00:41:50.612 lat (usec) : 100=0.05%, 250=3.99%, 500=18.82%, 750=21.44%, 1000=21.69% 00:41:50.612 lat (msec) : 2=29.98%, 4=3.59%, 10=0.44% 00:41:50.612 cpu : usr=25.26%, sys=55.52%, ctx=76, majf=0, minf=764 00:41:50.612 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=10.6%, 16=26.6%, 32=56.4%, >=64=1.8% 00:41:50.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.612 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:41:50.612 issued rwts: total=208924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.612 latency : target=0, window=0, percentile=100.00%, depth=64 00:41:50.612 00:41:50.612 Run status group 0 (all jobs): 00:41:50.612 READ: bw=163MiB/s (171MB/s), 163MiB/s-163MiB/s (171MB/s-171MB/s), io=816MiB (856MB), run=5001-5001msec 00:41:51.180 ----------------------------------------------------- 00:41:51.181 Suppressions used: 00:41:51.181 count bytes template 00:41:51.181 1 11 /usr/src/fio/parse.c 00:41:51.181 1 8 libtcmalloc_minimal.so 00:41:51.181 1 904 libcrypto.so 00:41:51.181 ----------------------------------------------------- 00:41:51.181 00:41:51.181 17:38:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:41:51.181 17:38:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:41:51.181 17:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:41:51.181 17:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:51.181 17:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:51.181 17:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:51.181 17:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:41:51.181 17:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:41:51.181 17:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:51.181 17:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:51.181 17:38:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:41:51.181 17:38:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:41:51.181 17:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:41:51.181 17:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:41:51.181 17:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:41:51.181 17:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:51.181 17:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:41:51.181 17:38:51 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:41:51.181 17:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:41:51.181 17:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:41:51.181 17:38:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:41:51.181 { 00:41:51.181 "subsystems": [ 00:41:51.181 { 00:41:51.181 "subsystem": "bdev", 00:41:51.181 "config": [ 00:41:51.181 { 00:41:51.181 "params": { 00:41:51.181 "io_mechanism": "libaio", 00:41:51.181 "conserve_cpu": false, 00:41:51.181 "filename": "/dev/nvme0n1", 00:41:51.181 "name": "xnvme_bdev" 00:41:51.181 }, 00:41:51.181 "method": "bdev_xnvme_create" 00:41:51.181 }, 00:41:51.181 { 00:41:51.181 "method": "bdev_wait_for_examine" 00:41:51.181 } 00:41:51.181 ] 00:41:51.181 } 00:41:51.181 ] 00:41:51.181 } 00:41:51.440 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:41:51.440 fio-3.35 00:41:51.440 Starting 1 thread 00:41:58.014 00:41:58.014 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70842: Tue Nov 26 17:38:57 2024 00:41:58.014 write: IOPS=42.6k, BW=167MiB/s (175MB/s)(833MiB/5001msec); 0 zone resets 00:41:58.014 slat (usec): min=4, max=787, avg=20.65, stdev=31.80 00:41:58.014 clat (usec): min=71, max=5941, avg=873.27, stdev=511.86 00:41:58.014 lat (usec): min=78, max=6061, avg=893.91, stdev=514.32 00:41:58.014 clat percentiles (usec): 00:41:58.014 | 1.00th=[ 184], 5.00th=[ 269], 10.00th=[ 330], 20.00th=[ 457], 00:41:58.014 | 30.00th=[ 578], 40.00th=[ 693], 50.00th=[ 816], 60.00th=[ 930], 00:41:58.015 | 70.00th=[ 1057], 80.00th=[ 1188], 90.00th=[ 1385], 95.00th=[ 1614], 00:41:58.015 | 99.00th=[ 2966], 99.50th=[ 3458], 99.90th=[ 4359], 99.95th=[ 4555], 00:41:58.015 | 99.99th=[ 5145] 00:41:58.015 bw ( KiB/s): min=149520, max=193960, per=100.00%, avg=172640.00, stdev=13243.17, samples=9 00:41:58.015 iops : min=37380, max=48490, avg=43160.00, stdev=3310.79, samples=9 00:41:58.015 lat (usec) : 100=0.09%, 250=3.66%, 500=19.79%, 750=21.27%, 1000=21.17% 00:41:58.015 lat (msec) : 2=31.27%, 4=2.56%, 10=0.21% 00:41:58.015 cpu : usr=24.80%, sys=58.40%, ctx=36, majf=0, minf=765 00:41:58.015 IO depths : 1=0.1%, 2=0.9%, 4=3.8%, 8=11.2%, 16=26.8%, 32=55.5%, >=64=1.7% 00:41:58.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.015 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:41:58.015 issued rwts: total=0,213230,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.015 latency : target=0, window=0, percentile=100.00%, depth=64 00:41:58.015 00:41:58.015 Run status group 0 (all jobs): 00:41:58.015 WRITE: bw=167MiB/s (175MB/s), 167MiB/s-167MiB/s (175MB/s-175MB/s), io=833MiB (873MB), run=5001-5001msec 00:41:58.274 ----------------------------------------------------- 00:41:58.274 Suppressions used: 00:41:58.274 count bytes template 00:41:58.274 1 11 /usr/src/fio/parse.c 00:41:58.274 1 8 libtcmalloc_minimal.so 00:41:58.274 1 904 libcrypto.so 00:41:58.274 ----------------------------------------------------- 00:41:58.274 00:41:58.534 ************************************ 00:41:58.534 END TEST xnvme_fio_plugin 00:41:58.534 ************************************ 
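Stripped of the xtrace prefixes, each fio pass in this test is one invocation of stock fio against SPDK's external spdk_bdev ioengine, fed the same bdev_xnvme_create JSON used for bdevperf. A sketch of the randread pass, assuming the paths the trace shows and the hypothetical /tmp/xnvme.json from above; note that --filename names the bdev from the JSON, not a device node:

    # preload order matters: asan first (when built with it), then the ioengine plugin
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev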
00:41:58.534 00:41:58.534 real 0m14.765s 00:41:58.534 user 0m6.142s 00:41:58.534 sys 0m6.504s 00:41:58.534 17:38:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:58.534 17:38:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:41:58.534 17:38:59 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:41:58.534 17:38:59 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:41:58.534 17:38:59 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:41:58.534 17:38:59 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:41:58.534 17:38:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:58.534 17:38:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:58.534 17:38:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:41:58.534 ************************************ 00:41:58.534 START TEST xnvme_rpc 00:41:58.534 ************************************ 00:41:58.534 17:38:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:41:58.534 17:38:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:41:58.534 17:38:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:41:58.534 17:38:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:41:58.534 17:38:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:41:58.534 17:38:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70928 00:41:58.534 17:38:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:58.534 17:38:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70928 00:41:58.534 17:38:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70928 ']' 00:41:58.534 17:38:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:58.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:58.534 17:38:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:58.534 17:38:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:58.534 17:38:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:58.534 17:38:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:58.534 [2024-11-26 17:38:59.187940] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
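This second xnvme_rpc pass repeats the first with -c appended at create time, so conserve_cpu should now read back as true. The rpc_xnvme calls that follow are a small helper whose shape can be read straight off the common.sh@65/@66 trace lines; a paraphrased sketch, again assuming rpc_cmd resolves to scripts/rpc.py:

    # rpc_xnvme <param>: pull one bdev_xnvme_create parameter out of the live config
    rpc_xnvme() {
        ./scripts/rpc.py framework_get_config bdev \
            | jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
    }
    # the assertions the test makes after creating the bdev with -c:
    [[ "$(rpc_xnvme name)" == "xnvme_bdev" ]]
    [[ "$(rpc_xnvme filename)" == "/dev/nvme0n1" ]]
    [[ "$(rpc_xnvme io_mechanism)" == "libaio" ]]
    [[ "$(rpc_xnvme conserve_cpu)" == "true" ]]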
00:41:58.534 [2024-11-26 17:38:59.188294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70928 ] 00:41:58.793 [2024-11-26 17:38:59.366207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:58.793 [2024-11-26 17:38:59.471671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:59.730 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:59.730 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:41:59.730 17:39:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:41:59.730 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.730 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:59.730 xnvme_bdev 00:41:59.730 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:59.730 17:39:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:41:59.730 17:39:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:41:59.730 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.730 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:59.730 17:39:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:41:59.730 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:59.730 17:39:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:41:59.730 17:39:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:41:59.730 17:39:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:41:59.731 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.731 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:59.731 17:39:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70928 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70928 ']' 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70928 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70928 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70928' 00:41:59.990 killing process with pid 70928 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70928 00:41:59.990 17:39:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70928 00:42:02.528 00:42:02.528 real 0m3.814s 00:42:02.528 user 0m3.827s 00:42:02.528 sys 0m0.550s 00:42:02.528 17:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:02.528 17:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:02.528 ************************************ 00:42:02.528 END TEST xnvme_rpc 00:42:02.528 ************************************ 00:42:02.528 17:39:02 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:42:02.528 17:39:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:02.528 17:39:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:02.528 17:39:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:42:02.528 ************************************ 00:42:02.528 START TEST xnvme_bdevperf 00:42:02.528 ************************************ 00:42:02.528 17:39:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:42:02.528 17:39:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:42:02.528 17:39:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:42:02.528 17:39:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:42:02.528 17:39:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:42:02.528 17:39:02 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:42:02.528 17:39:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:42:02.528 17:39:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:02.528 { 00:42:02.528 "subsystems": [ 00:42:02.528 { 00:42:02.528 "subsystem": "bdev", 00:42:02.528 "config": [ 00:42:02.528 { 00:42:02.528 "params": { 00:42:02.528 "io_mechanism": "libaio", 00:42:02.528 "conserve_cpu": true, 00:42:02.528 "filename": "/dev/nvme0n1", 00:42:02.528 "name": "xnvme_bdev" 00:42:02.528 }, 00:42:02.528 "method": "bdev_xnvme_create" 00:42:02.528 }, 00:42:02.528 { 00:42:02.528 "method": "bdev_wait_for_examine" 00:42:02.528 } 00:42:02.528 ] 00:42:02.528 } 00:42:02.528 ] 00:42:02.528 } 00:42:02.528 [2024-11-26 17:39:03.069011] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:42:02.528 [2024-11-26 17:39:03.069126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71012 ] 00:42:02.788 [2024-11-26 17:39:03.247175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:02.788 [2024-11-26 17:39:03.356430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:03.047 Running I/O for 5 seconds... 00:42:05.361 47245.00 IOPS, 184.55 MiB/s [2024-11-26T17:39:06.991Z] 47870.50 IOPS, 186.99 MiB/s [2024-11-26T17:39:07.929Z] 48324.67 IOPS, 188.77 MiB/s [2024-11-26T17:39:08.866Z] 48735.75 IOPS, 190.37 MiB/s 00:42:08.172 Latency(us) 00:42:08.172 [2024-11-26T17:39:08.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:08.172 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:42:08.172 xnvme_bdev : 5.00 48749.27 190.43 0.00 0.00 1310.09 167.79 5369.21 00:42:08.172 [2024-11-26T17:39:08.866Z] =================================================================================================================== 00:42:08.172 [2024-11-26T17:39:08.866Z] Total : 48749.27 190.43 0.00 0.00 1310.09 167.79 5369.21 00:42:09.130 17:39:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:42:09.130 17:39:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:42:09.390 17:39:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:42:09.390 17:39:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:42:09.390 17:39:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:09.390 { 00:42:09.390 "subsystems": [ 00:42:09.390 { 00:42:09.390 "subsystem": "bdev", 00:42:09.390 "config": [ 00:42:09.390 { 00:42:09.390 "params": { 00:42:09.390 "io_mechanism": "libaio", 00:42:09.390 "conserve_cpu": true, 00:42:09.390 "filename": "/dev/nvme0n1", 00:42:09.390 "name": "xnvme_bdev" 00:42:09.390 }, 00:42:09.390 "method": "bdev_xnvme_create" 00:42:09.390 }, 00:42:09.390 { 00:42:09.390 "method": "bdev_wait_for_examine" 00:42:09.390 } 00:42:09.390 ] 00:42:09.390 } 00:42:09.390 ] 00:42:09.390 } 00:42:09.390 [2024-11-26 17:39:09.914634] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
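The IOPS and MiB/s columns in these bdevperf summaries are redundant by construction: every run here passes -o 4096, so one IO is 4 KiB and MiB/s is simply IOPS/256. Checking the conserve_cpu randread row above:

    # 4096 B per IO  =>  MiB/s = IOPS * 4096 / 2^20 = IOPS / 256
    echo 'scale=2; 48749.27 / 256' | bc    # 190.42, the 190.43 MiB/s column up to rounding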
00:42:09.390 [2024-11-26 17:39:09.914743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71088 ] 00:42:09.649 [2024-11-26 17:39:10.094010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:09.649 [2024-11-26 17:39:10.199944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:09.908 Running I/O for 5 seconds... 00:42:11.827 49933.00 IOPS, 195.05 MiB/s [2024-11-26T17:39:13.899Z] 49305.00 IOPS, 192.60 MiB/s [2024-11-26T17:39:14.836Z] 49316.00 IOPS, 192.64 MiB/s [2024-11-26T17:39:15.775Z] 48641.25 IOPS, 190.00 MiB/s [2024-11-26T17:39:15.775Z] 47886.60 IOPS, 187.06 MiB/s 00:42:15.081 Latency(us) 00:42:15.081 [2024-11-26T17:39:15.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:15.081 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:42:15.081 xnvme_bdev : 5.00 47861.26 186.96 0.00 0.00 1334.26 360.25 4790.18 00:42:15.081 [2024-11-26T17:39:15.775Z] =================================================================================================================== 00:42:15.081 [2024-11-26T17:39:15.775Z] Total : 47861.26 186.96 0.00 0.00 1334.26 360.25 4790.18 00:42:16.016 00:42:16.016 real 0m13.638s 00:42:16.016 user 0m5.118s 00:42:16.016 sys 0m6.738s 00:42:16.016 17:39:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:16.016 17:39:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:16.016 ************************************ 00:42:16.016 END TEST xnvme_bdevperf 00:42:16.016 ************************************ 00:42:16.016 17:39:16 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:42:16.016 17:39:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:16.016 17:39:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:16.016 17:39:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:42:16.016 ************************************ 00:42:16.016 START TEST xnvme_fio_plugin 00:42:16.016 ************************************ 00:42:16.016 17:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:42:16.016 17:39:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:42:16.016 17:39:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:42:16.016 17:39:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:42:16.016 17:39:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:42:16.017 17:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:42:16.017 17:39:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:42:16.017 17:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:16.017 
17:39:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:42:16.017 17:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:16.017 17:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:16.017 17:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:42:16.017 17:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:42:16.017 17:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:16.017 17:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:16.017 17:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:42:16.017 17:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:42:16.017 17:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:42:16.017 17:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:16.276 17:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:42:16.276 17:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:42:16.276 17:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:42:16.276 17:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:42:16.276 17:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:42:16.276 { 00:42:16.276 "subsystems": [ 00:42:16.276 { 00:42:16.276 "subsystem": "bdev", 00:42:16.276 "config": [ 00:42:16.276 { 00:42:16.276 "params": { 00:42:16.276 "io_mechanism": "libaio", 00:42:16.276 "conserve_cpu": true, 00:42:16.276 "filename": "/dev/nvme0n1", 00:42:16.276 "name": "xnvme_bdev" 00:42:16.276 }, 00:42:16.276 "method": "bdev_xnvme_create" 00:42:16.276 }, 00:42:16.276 { 00:42:16.276 "method": "bdev_wait_for_examine" 00:42:16.276 } 00:42:16.276 ] 00:42:16.276 } 00:42:16.276 ] 00:42:16.276 } 00:42:16.276 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:42:16.276 fio-3.35 00:42:16.276 Starting 1 thread 00:42:22.840 00:42:22.840 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71213: Tue Nov 26 17:39:22 2024 00:42:22.840 read: IOPS=46.9k, BW=183MiB/s (192MB/s)(916MiB/5001msec) 00:42:22.840 slat (usec): min=4, max=3768, avg=18.39, stdev=35.95 00:42:22.840 clat (usec): min=33, max=7944, avg=822.99, stdev=453.14 00:42:22.840 lat (usec): min=77, max=7949, avg=841.38, stdev=453.76 00:42:22.840 clat percentiles (usec): 00:42:22.840 | 1.00th=[ 182], 5.00th=[ 262], 10.00th=[ 334], 20.00th=[ 457], 00:42:22.840 | 30.00th=[ 570], 40.00th=[ 676], 50.00th=[ 783], 60.00th=[ 889], 00:42:22.840 | 70.00th=[ 996], 80.00th=[ 1123], 90.00th=[ 1287], 95.00th=[ 1450], 00:42:22.840 | 99.00th=[ 2507], 99.50th=[ 3195], 99.90th=[ 4359], 99.95th=[ 4686], 00:42:22.840 | 99.99th=[ 5211] 00:42:22.840 bw ( KiB/s): min=167640, max=217472, 
per=99.72%, avg=186984.00, stdev=14704.31, samples=9 00:42:22.840 iops : min=41910, max=54368, avg=46746.00, stdev=3676.08, samples=9 00:42:22.840 lat (usec) : 50=0.01%, 100=0.10%, 250=4.13%, 500=19.77%, 750=23.10% 00:42:22.840 lat (usec) : 1000=23.40% 00:42:22.840 lat (msec) : 2=27.92%, 4=1.39%, 10=0.19% 00:42:22.840 cpu : usr=25.38%, sys=59.80%, ctx=55, majf=0, minf=764 00:42:22.840 IO depths : 1=0.2%, 2=0.9%, 4=3.8%, 8=10.8%, 16=25.8%, 32=56.7%, >=64=1.8% 00:42:22.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:22.840 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:42:22.840 issued rwts: total=234436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:22.840 latency : target=0, window=0, percentile=100.00%, depth=64 00:42:22.840 00:42:22.840 Run status group 0 (all jobs): 00:42:22.840 READ: bw=183MiB/s (192MB/s), 183MiB/s-183MiB/s (192MB/s-192MB/s), io=916MiB (960MB), run=5001-5001msec 00:42:23.408 ----------------------------------------------------- 00:42:23.408 Suppressions used: 00:42:23.408 count bytes template 00:42:23.408 1 11 /usr/src/fio/parse.c 00:42:23.408 1 8 libtcmalloc_minimal.so 00:42:23.408 1 904 libcrypto.so 00:42:23.408 ----------------------------------------------------- 00:42:23.408 00:42:23.408 17:39:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:42:23.408 17:39:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:42:23.408 17:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:42:23.408 17:39:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:42:23.408 17:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:23.408 17:39:24 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:42:23.408 17:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:23.408 17:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:42:23.408 17:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:23.408 17:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:42:23.408 17:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:42:23.408 17:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:23.408 17:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:23.408 17:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:42:23.408 17:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:23.408 17:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:42:23.408 17:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # 
asan_lib=/usr/lib64/libasan.so.8 00:42:23.408 17:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:42:23.409 17:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:42:23.409 17:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:42:23.409 { 00:42:23.409 "subsystems": [ 00:42:23.409 { 00:42:23.409 "subsystem": "bdev", 00:42:23.409 "config": [ 00:42:23.409 { 00:42:23.409 "params": { 00:42:23.409 "io_mechanism": "libaio", 00:42:23.409 "conserve_cpu": true, 00:42:23.409 "filename": "/dev/nvme0n1", 00:42:23.409 "name": "xnvme_bdev" 00:42:23.409 }, 00:42:23.409 "method": "bdev_xnvme_create" 00:42:23.409 }, 00:42:23.409 { 00:42:23.409 "method": "bdev_wait_for_examine" 00:42:23.409 } 00:42:23.409 ] 00:42:23.409 } 00:42:23.409 ] 00:42:23.409 } 00:42:23.409 17:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:42:23.668 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:42:23.668 fio-3.35 00:42:23.668 Starting 1 thread 00:42:30.235 00:42:30.235 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71305: Tue Nov 26 17:39:30 2024 00:42:30.235 write: IOPS=43.3k, BW=169MiB/s (177MB/s)(846MiB/5001msec); 0 zone resets 00:42:30.235 slat (usec): min=4, max=1135, avg=20.19, stdev=34.79 00:42:30.235 clat (usec): min=85, max=5385, avg=878.60, stdev=489.37 00:42:30.235 lat (usec): min=147, max=5456, avg=898.79, stdev=491.05 00:42:30.235 clat percentiles (usec): 00:42:30.235 | 1.00th=[ 194], 5.00th=[ 277], 10.00th=[ 347], 20.00th=[ 478], 00:42:30.235 | 30.00th=[ 594], 40.00th=[ 717], 50.00th=[ 832], 60.00th=[ 947], 00:42:30.235 | 70.00th=[ 1057], 80.00th=[ 1188], 90.00th=[ 1369], 95.00th=[ 1549], 00:42:30.235 | 99.00th=[ 2835], 99.50th=[ 3425], 99.90th=[ 4293], 99.95th=[ 4555], 00:42:30.235 | 99.99th=[ 4883] 00:42:30.235 bw ( KiB/s): min=155552, max=201592, per=100.00%, avg=174410.22, stdev=14255.94, samples=9 00:42:30.235 iops : min=38888, max=50398, avg=43602.56, stdev=3563.99, samples=9 00:42:30.235 lat (usec) : 100=0.07%, 250=3.35%, 500=18.41%, 750=21.10%, 1000=21.62% 00:42:30.235 lat (msec) : 2=33.19%, 4=2.04%, 10=0.22% 00:42:30.235 cpu : usr=24.98%, sys=58.94%, ctx=25, majf=0, minf=765 00:42:30.235 IO depths : 1=0.1%, 2=0.8%, 4=3.7%, 8=11.2%, 16=26.1%, 32=56.2%, >=64=1.8% 00:42:30.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:30.235 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:42:30.235 issued rwts: total=0,216508,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:30.235 latency : target=0, window=0, percentile=100.00%, depth=64 00:42:30.235 00:42:30.235 Run status group 0 (all jobs): 00:42:30.235 WRITE: bw=169MiB/s (177MB/s), 169MiB/s-169MiB/s (177MB/s-177MB/s), io=846MiB (887MB), run=5001-5001msec 00:42:30.805 ----------------------------------------------------- 00:42:30.805 Suppressions used: 00:42:30.805 count bytes template 00:42:30.805 1 11 /usr/src/fio/parse.c 00:42:30.805 1 8 libtcmalloc_minimal.so 00:42:30.805 1 904 libcrypto.so 00:42:30.805 ----------------------------------------------------- 00:42:30.805 00:42:30.805 00:42:30.805 real 0m14.751s 00:42:30.805 
user 0m6.165s 00:42:30.805 sys 0m6.741s 00:42:30.805 ************************************ 00:42:30.805 END TEST xnvme_fio_plugin 00:42:30.805 ************************************ 00:42:30.805 17:39:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:30.805 17:39:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:42:30.805 17:39:31 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:42:30.805 17:39:31 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:42:30.805 17:39:31 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:42:30.805 17:39:31 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:42:30.805 17:39:31 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:42:30.805 17:39:31 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:42:30.805 17:39:31 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:42:30.805 17:39:31 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:42:30.805 17:39:31 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:42:30.805 17:39:31 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:30.805 17:39:31 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:30.805 17:39:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:42:31.065 ************************************ 00:42:31.065 START TEST xnvme_rpc 00:42:31.065 ************************************ 00:42:31.065 17:39:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:42:31.065 17:39:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:42:31.065 17:39:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:42:31.065 17:39:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:42:31.065 17:39:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:42:31.065 17:39:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71391 00:42:31.065 17:39:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:31.065 17:39:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71391 00:42:31.065 17:39:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71391 ']' 00:42:31.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:31.065 17:39:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:31.065 17:39:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:31.065 17:39:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:31.065 17:39:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:31.065 17:39:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:31.065 [2024-11-26 17:39:31.632080] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
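The @75/@82 lines mark the outer test matrix advancing: the whole rpc/bdevperf/fio_plugin trio is rerun for each (io_mechanism, conserve_cpu) pair, here moving on to io_uring with conserve_cpu off. A paraphrased sketch of that driver loop, reconstructed from the xnvme.sh line numbers in the trace (the array contents are assumed from the values that actually appear):

    declare -A method_bdev_xnvme_create_0          # serialized into the JSON config by gen_conf
    for io in "${xnvme_io[@]}"; do                 # libaio, io_uring, ...
        method_bdev_xnvme_create_0["io_mechanism"]=$io
        method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1
        for cc in "${xnvme_conserve_cpu[@]}"; do   # false, true
            method_bdev_xnvme_create_0["conserve_cpu"]=$cc
            run_test xnvme_rpc xnvme_rpc
            run_test xnvme_bdevperf xnvme_bdevperf
            run_test xnvme_fio_plugin xnvme_fio_plugin
        done
    done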
00:42:31.065 [2024-11-26 17:39:31.632212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71391 ] 00:42:31.324 [2024-11-26 17:39:31.819397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:31.324 [2024-11-26 17:39:31.934013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:32.262 17:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:32.262 17:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:42:32.262 17:39:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:42:32.262 17:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.262 17:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:32.262 xnvme_bdev 00:42:32.262 17:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.263 17:39:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:42:32.263 17:39:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:42:32.263 17:39:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:42:32.263 17:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.263 17:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:32.263 17:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.263 17:39:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:42:32.263 17:39:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:42:32.263 17:39:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:42:32.263 17:39:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:42:32.263 17:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.263 17:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:32.263 17:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.263 17:39:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:42:32.263 17:39:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:42:32.263 17:39:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:42:32.263 17:39:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:42:32.263 17:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.263 17:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:32.263 17:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.263 17:39:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:42:32.522 17:39:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:42:32.522 17:39:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:42:32.522 17:39:32 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.522 17:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:32.522 17:39:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:42:32.522 17:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.522 17:39:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:42:32.522 17:39:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:42:32.522 17:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.522 17:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:32.522 17:39:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.522 17:39:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71391 00:42:32.522 17:39:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71391 ']' 00:42:32.522 17:39:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71391 00:42:32.522 17:39:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:42:32.522 17:39:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:32.522 17:39:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71391 00:42:32.522 killing process with pid 71391 00:42:32.522 17:39:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:32.522 17:39:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:32.522 17:39:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71391' 00:42:32.522 17:39:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71391 00:42:32.522 17:39:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71391 00:42:35.056 ************************************ 00:42:35.056 END TEST xnvme_rpc 00:42:35.056 ************************************ 00:42:35.056 00:42:35.056 real 0m3.980s 00:42:35.056 user 0m3.997s 00:42:35.056 sys 0m0.594s 00:42:35.056 17:39:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:35.056 17:39:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:42:35.056 17:39:35 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:42:35.056 17:39:35 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:35.056 17:39:35 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:35.056 17:39:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:42:35.056 ************************************ 00:42:35.056 START TEST xnvme_bdevperf 00:42:35.056 ************************************ 00:42:35.056 17:39:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:42:35.056 17:39:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:42:35.056 17:39:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:42:35.056 17:39:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:42:35.056 17:39:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:42:35.056 17:39:35 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:42:35.056 17:39:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:42:35.056 17:39:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:35.056 { 00:42:35.056 "subsystems": [ 00:42:35.056 { 00:42:35.056 "subsystem": "bdev", 00:42:35.056 "config": [ 00:42:35.056 { 00:42:35.056 "params": { 00:42:35.056 "io_mechanism": "io_uring", 00:42:35.056 "conserve_cpu": false, 00:42:35.056 "filename": "/dev/nvme0n1", 00:42:35.056 "name": "xnvme_bdev" 00:42:35.056 }, 00:42:35.056 "method": "bdev_xnvme_create" 00:42:35.056 }, 00:42:35.056 { 00:42:35.056 "method": "bdev_wait_for_examine" 00:42:35.056 } 00:42:35.056 ] 00:42:35.056 } 00:42:35.056 ] 00:42:35.056 } 00:42:35.056 [2024-11-26 17:39:35.659259] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:42:35.056 [2024-11-26 17:39:35.659391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71476 ] 00:42:35.315 [2024-11-26 17:39:35.840105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:35.315 [2024-11-26 17:39:35.953182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:35.894 Running I/O for 5 seconds... 00:42:37.763 40345.00 IOPS, 157.60 MiB/s [2024-11-26T17:39:39.402Z] 35087.50 IOPS, 137.06 MiB/s [2024-11-26T17:39:40.341Z] 31309.33 IOPS, 122.30 MiB/s [2024-11-26T17:39:41.719Z] 29412.50 IOPS, 114.89 MiB/s [2024-11-26T17:39:41.719Z] 28870.20 IOPS, 112.77 MiB/s 00:42:41.025 Latency(us) 00:42:41.025 [2024-11-26T17:39:41.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:41.025 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:42:41.025 xnvme_bdev : 5.01 28833.04 112.63 0.00 0.00 2213.05 371.77 8632.85 00:42:41.025 [2024-11-26T17:39:41.719Z] =================================================================================================================== 00:42:41.025 [2024-11-26T17:39:41.719Z] Total : 28833.04 112.63 0.00 0.00 2213.05 371.77 8632.85 00:42:41.963 17:39:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:42:41.963 17:39:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:42:41.963 17:39:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:42:41.963 17:39:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:42:41.963 17:39:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:41.963 { 00:42:41.963 "subsystems": [ 00:42:41.963 { 00:42:41.963 "subsystem": "bdev", 00:42:41.963 "config": [ 00:42:41.963 { 00:42:41.963 "params": { 00:42:41.963 "io_mechanism": "io_uring", 00:42:41.963 "conserve_cpu": false, 00:42:41.963 "filename": "/dev/nvme0n1", 00:42:41.963 "name": "xnvme_bdev" 00:42:41.963 }, 00:42:41.963 "method": "bdev_xnvme_create" 00:42:41.963 }, 00:42:41.963 { 00:42:41.963 "method": "bdev_wait_for_examine" 00:42:41.963 } 00:42:41.963 ] 00:42:41.963 } 00:42:41.963 ] 00:42:41.963 } 00:42:41.963 [2024-11-26 17:39:42.541293] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
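The recurring --json /dev/fd/62 in these invocations is bash process substitution at work: gen_conf prints the subsystems JSON on a pipe and bdevperf reads its config from the inherited descriptor, so no temp file is written. A sketch of the underlying call, assuming a gen_conf function that emits the io_uring config shown above:

    # <(gen_conf) expands to a /dev/fd/NN path such as the /dev/fd/62 in the trace
    ./build/examples/bdevperf --json <(gen_conf) -q 64 -w randread -t 5 -T xnvme_bdev -o 4096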
00:42:41.963 [2024-11-26 17:39:42.541675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71557 ] 00:42:42.221 [2024-11-26 17:39:42.721910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:42.221 [2024-11-26 17:39:42.834461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:42.787 Running I/O for 5 seconds... 00:42:44.659 24320.00 IOPS, 95.00 MiB/s [2024-11-26T17:39:46.316Z] 23904.00 IOPS, 93.38 MiB/s [2024-11-26T17:39:47.253Z] 23680.00 IOPS, 92.50 MiB/s [2024-11-26T17:39:48.188Z] 23616.00 IOPS, 92.25 MiB/s 00:42:47.494 Latency(us) 00:42:47.494 [2024-11-26T17:39:48.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:47.494 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:42:47.494 xnvme_bdev : 5.00 23517.27 91.86 0.00 0.00 2712.36 1598.92 7843.26 00:42:47.494 [2024-11-26T17:39:48.188Z] =================================================================================================================== 00:42:47.494 [2024-11-26T17:39:48.188Z] Total : 23517.27 91.86 0.00 0.00 2712.36 1598.92 7843.26 00:42:48.870 00:42:48.870 real 0m13.758s 00:42:48.870 user 0m7.047s 00:42:48.870 sys 0m6.457s 00:42:48.870 17:39:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:48.870 ************************************ 00:42:48.870 END TEST xnvme_bdevperf 00:42:48.870 ************************************ 00:42:48.870 17:39:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:42:48.870 17:39:49 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:42:48.870 17:39:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:48.870 17:39:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:48.870 17:39:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:42:48.870 ************************************ 00:42:48.870 START TEST xnvme_fio_plugin 00:42:48.870 ************************************ 00:42:48.870 17:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:42:48.870 17:39:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:42:48.870 17:39:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:42:48.870 17:39:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:42:48.870 17:39:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:42:48.870 17:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:42:48.870 17:39:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:42:48.870 17:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:48.871 17:39:49 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # 
xtrace_disable 00:42:48.871 17:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:48.871 17:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:42:48.871 17:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:48.871 17:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:42:48.871 17:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:42:48.871 17:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:48.871 17:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:48.871 17:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:42:48.871 17:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:42:48.871 17:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:48.871 17:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:42:48.871 { 00:42:48.871 "subsystems": [ 00:42:48.871 { 00:42:48.871 "subsystem": "bdev", 00:42:48.871 "config": [ 00:42:48.871 { 00:42:48.871 "params": { 00:42:48.871 "io_mechanism": "io_uring", 00:42:48.871 "conserve_cpu": false, 00:42:48.871 "filename": "/dev/nvme0n1", 00:42:48.871 "name": "xnvme_bdev" 00:42:48.871 }, 00:42:48.871 "method": "bdev_xnvme_create" 00:42:48.871 }, 00:42:48.871 { 00:42:48.871 "method": "bdev_wait_for_examine" 00:42:48.871 } 00:42:48.871 ] 00:42:48.871 } 00:42:48.871 ] 00:42:48.871 } 00:42:48.871 17:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:42:48.871 17:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:42:48.871 17:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:42:48.871 17:39:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:42:49.129 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:42:49.129 fio-3.35 00:42:49.129 Starting 1 thread 00:42:55.692 00:42:55.692 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71682: Tue Nov 26 17:39:55 2024 00:42:55.692 read: IOPS=29.0k, BW=113MiB/s (119MB/s)(566MiB/5001msec) 00:42:55.692 slat (usec): min=4, max=2890, avg= 5.96, stdev= 7.81 00:42:55.692 clat (usec): min=1358, max=5058, avg=1976.52, stdev=228.09 00:42:55.692 lat (usec): min=1363, max=5066, avg=1982.48, stdev=228.96 00:42:55.692 clat percentiles (usec): 00:42:55.692 | 1.00th=[ 1565], 5.00th=[ 1647], 10.00th=[ 1713], 20.00th=[ 1778], 00:42:55.692 | 30.00th=[ 1844], 40.00th=[ 1893], 50.00th=[ 1958], 60.00th=[ 2008], 00:42:55.692 | 70.00th=[ 2073], 80.00th=[ 2147], 90.00th=[ 2278], 95.00th=[ 2376], 00:42:55.692 | 99.00th=[ 2606], 99.50th=[ 2671], 99.90th=[ 2802], 99.95th=[ 2900], 00:42:55.692 | 99.99th=[ 4948] 00:42:55.692 bw ( KiB/s): min=108544, max=122368, per=99.64%, avg=115427.56, stdev=5107.90, 
samples=9 00:42:55.692 iops : min=27136, max=30592, avg=28856.89, stdev=1276.97, samples=9 00:42:55.692 lat (msec) : 2=58.82%, 4=41.17%, 10=0.01% 00:42:55.692 cpu : usr=31.94%, sys=67.00%, ctx=14, majf=0, minf=762 00:42:55.692 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:42:55.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:55.692 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:42:55.692 issued rwts: total=144832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:55.692 latency : target=0, window=0, percentile=100.00%, depth=64 00:42:55.692 00:42:55.692 Run status group 0 (all jobs): 00:42:55.692 READ: bw=113MiB/s (119MB/s), 113MiB/s-113MiB/s (119MB/s-119MB/s), io=566MiB (593MB), run=5001-5001msec 00:42:56.261 ----------------------------------------------------- 00:42:56.261 Suppressions used: 00:42:56.261 count bytes template 00:42:56.261 1 11 /usr/src/fio/parse.c 00:42:56.261 1 8 libtcmalloc_minimal.so 00:42:56.261 1 904 libcrypto.so 00:42:56.261 ----------------------------------------------------- 00:42:56.261 00:42:56.261 17:39:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:42:56.261 17:39:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:42:56.261 17:39:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:42:56.261 17:39:56 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:42:56.261 17:39:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:42:56.261 17:39:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:42:56.261 17:39:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:56.261 17:39:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:56.261 17:39:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:56.262 17:39:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:42:56.262 17:39:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:42:56.262 17:39:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:56.262 17:39:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:56.262 17:39:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:42:56.262 17:39:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:42:56.262 17:39:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:56.262 { 00:42:56.262 "subsystems": [ 00:42:56.262 { 00:42:56.262 "subsystem": "bdev", 00:42:56.262 "config": [ 00:42:56.262 { 00:42:56.262 "params": { 00:42:56.262 "io_mechanism": "io_uring", 00:42:56.262 "conserve_cpu": false, 00:42:56.262 "filename": 
"/dev/nvme0n1", 00:42:56.262 "name": "xnvme_bdev" 00:42:56.262 }, 00:42:56.262 "method": "bdev_xnvme_create" 00:42:56.262 }, 00:42:56.262 { 00:42:56.262 "method": "bdev_wait_for_examine" 00:42:56.262 } 00:42:56.262 ] 00:42:56.262 } 00:42:56.262 ] 00:42:56.262 } 00:42:56.262 17:39:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:42:56.262 17:39:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:42:56.262 17:39:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:42:56.262 17:39:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:42:56.262 17:39:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:42:56.521 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:42:56.521 fio-3.35 00:42:56.521 Starting 1 thread 00:43:03.088 00:43:03.088 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71774: Tue Nov 26 17:40:02 2024 00:43:03.088 write: IOPS=31.3k, BW=122MiB/s (128MB/s)(611MiB/5001msec); 0 zone resets 00:43:03.088 slat (usec): min=2, max=135, avg= 5.49, stdev= 1.93 00:43:03.088 clat (usec): min=1130, max=6458, avg=1831.54, stdev=278.11 00:43:03.088 lat (usec): min=1133, max=6464, avg=1837.03, stdev=278.94 00:43:03.088 clat percentiles (usec): 00:43:03.088 | 1.00th=[ 1369], 5.00th=[ 1467], 10.00th=[ 1532], 20.00th=[ 1598], 00:43:03.089 | 30.00th=[ 1663], 40.00th=[ 1729], 50.00th=[ 1778], 60.00th=[ 1844], 00:43:03.089 | 70.00th=[ 1926], 80.00th=[ 2040], 90.00th=[ 2212], 95.00th=[ 2376], 00:43:03.089 | 99.00th=[ 2638], 99.50th=[ 2737], 99.90th=[ 3064], 99.95th=[ 3458], 00:43:03.089 | 99.99th=[ 3785] 00:43:03.089 bw ( KiB/s): min=112632, max=140288, per=99.51%, avg=124445.56, stdev=9915.05, samples=9 00:43:03.089 iops : min=28158, max=35072, avg=31111.33, stdev=2478.80, samples=9 00:43:03.089 lat (msec) : 2=76.76%, 4=23.24%, 10=0.01% 00:43:03.089 cpu : usr=31.90%, sys=67.08%, ctx=15, majf=0, minf=763 00:43:03.089 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:43:03.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.089 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:43:03.089 issued rwts: total=0,156351,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.089 latency : target=0, window=0, percentile=100.00%, depth=64 00:43:03.089 00:43:03.089 Run status group 0 (all jobs): 00:43:03.089 WRITE: bw=122MiB/s (128MB/s), 122MiB/s-122MiB/s (128MB/s-128MB/s), io=611MiB (640MB), run=5001-5001msec 00:43:03.658 ----------------------------------------------------- 00:43:03.658 Suppressions used: 00:43:03.658 count bytes template 00:43:03.658 1 11 /usr/src/fio/parse.c 00:43:03.658 1 8 libtcmalloc_minimal.so 00:43:03.658 1 904 libcrypto.so 00:43:03.658 ----------------------------------------------------- 00:43:03.658 00:43:03.658 ************************************ 00:43:03.658 END TEST xnvme_fio_plugin 00:43:03.658 ************************************ 00:43:03.658 00:43:03.658 real 0m14.868s 00:43:03.658 user 0m6.990s 00:43:03.658 sys 0m7.513s 00:43:03.658 17:40:04 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:43:03.658 17:40:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:43:03.658 17:40:04 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:43:03.658 17:40:04 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:43:03.658 17:40:04 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:43:03.658 17:40:04 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:43:03.658 17:40:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:03.658 17:40:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:03.658 17:40:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:43:03.658 ************************************ 00:43:03.658 START TEST xnvme_rpc 00:43:03.658 ************************************ 00:43:03.658 17:40:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:43:03.658 17:40:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:43:03.658 17:40:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:43:03.658 17:40:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:43:03.658 17:40:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:43:03.658 17:40:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71860 00:43:03.658 17:40:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:43:03.658 17:40:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71860 00:43:03.658 17:40:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71860 ']' 00:43:03.658 17:40:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:03.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:03.658 17:40:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:03.658 17:40:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:03.658 17:40:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:03.658 17:40:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:43:03.959 [2024-11-26 17:40:04.457995] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
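The xnvme_rpc test starting here boots a bare spdk_tgt (pid 71860) and exercises the xnvme bdev entirely over the RPC socket; note cc["true"]=-c above, which is how conserve_cpu=true is passed on this pass. rpc_cmd in the trace is the harness wrapper around the repo's scripts/rpc.py, so the same sequence can be run by hand roughly as follows (a sketch, assuming spdk_tgt is already listening on the default /var/tmp/spdk.sock):

scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
scripts/rpc.py bdev_xnvme_delete xnvme_bdev

The jq filter is exactly what the rpc_xnvme helper runs below to verify each of name, filename, io_mechanism, and conserve_cpu before tearing the bdev down.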
00:43:03.959 [2024-11-26 17:40:04.458976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71860 ] 00:43:04.246 [2024-11-26 17:40:04.665997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:04.246 [2024-11-26 17:40:04.774370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:05.183 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:05.183 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:43:05.183 17:40:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:43:05.183 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.183 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:43:05.183 xnvme_bdev 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:43:05.184 17:40:05 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71860 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71860 ']' 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71860 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71860 00:43:05.184 killing process with pid 71860 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71860' 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71860 00:43:05.184 17:40:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71860 00:43:07.721 ************************************ 00:43:07.721 END TEST xnvme_rpc 00:43:07.721 ************************************ 00:43:07.721 00:43:07.721 real 0m3.889s 00:43:07.721 user 0m3.906s 00:43:07.721 sys 0m0.596s 00:43:07.721 17:40:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:07.721 17:40:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:43:07.721 17:40:08 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:43:07.721 17:40:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:07.721 17:40:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:07.721 17:40:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:43:07.721 ************************************ 00:43:07.721 START TEST xnvme_bdevperf 00:43:07.721 ************************************ 00:43:07.721 17:40:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:43:07.721 17:40:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:43:07.721 17:40:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:43:07.721 17:40:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:43:07.721 17:40:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:43:07.721 17:40:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:43:07.721 17:40:08 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:43:07.721 17:40:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:07.721 { 00:43:07.721 "subsystems": [ 00:43:07.721 { 00:43:07.721 "subsystem": "bdev", 00:43:07.721 "config": [ 00:43:07.721 { 00:43:07.721 "params": { 00:43:07.721 "io_mechanism": "io_uring", 00:43:07.721 "conserve_cpu": true, 00:43:07.721 "filename": "/dev/nvme0n1", 00:43:07.721 "name": "xnvme_bdev" 00:43:07.721 }, 00:43:07.721 "method": "bdev_xnvme_create" 00:43:07.721 }, 00:43:07.721 { 00:43:07.721 "method": "bdev_wait_for_examine" 00:43:07.721 } 00:43:07.721 ] 00:43:07.721 } 00:43:07.721 ] 00:43:07.721 } 00:43:07.721 [2024-11-26 17:40:08.398662] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:43:07.721 [2024-11-26 17:40:08.398796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71945 ] 00:43:07.980 [2024-11-26 17:40:08.581707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:08.240 [2024-11-26 17:40:08.689911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:08.500 Running I/O for 5 seconds... 00:43:10.372 32192.00 IOPS, 125.75 MiB/s [2024-11-26T17:40:12.445Z] 31904.00 IOPS, 124.62 MiB/s [2024-11-26T17:40:13.381Z] 31701.33 IOPS, 123.83 MiB/s [2024-11-26T17:40:14.318Z] 30975.75 IOPS, 121.00 MiB/s [2024-11-26T17:40:14.318Z] 31167.80 IOPS, 121.75 MiB/s 00:43:13.624 Latency(us) 00:43:13.624 [2024-11-26T17:40:14.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:13.624 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:43:13.624 xnvme_bdev : 5.01 31135.27 121.62 0.00 0.00 2049.48 792.88 7369.51 00:43:13.624 [2024-11-26T17:40:14.319Z] =================================================================================================================== 00:43:13.625 [2024-11-26T17:40:14.319Z] Total : 31135.27 121.62 0.00 0.00 2049.48 792.88 7369.51 00:43:14.561 17:40:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:43:14.561 17:40:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:43:14.561 17:40:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:43:14.561 17:40:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:43:14.561 17:40:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:14.561 { 00:43:14.561 "subsystems": [ 00:43:14.561 { 00:43:14.561 "subsystem": "bdev", 00:43:14.561 "config": [ 00:43:14.561 { 00:43:14.561 "params": { 00:43:14.561 "io_mechanism": "io_uring", 00:43:14.561 "conserve_cpu": true, 00:43:14.561 "filename": "/dev/nvme0n1", 00:43:14.561 "name": "xnvme_bdev" 00:43:14.561 }, 00:43:14.561 "method": "bdev_xnvme_create" 00:43:14.561 }, 00:43:14.561 { 00:43:14.561 "method": "bdev_wait_for_examine" 00:43:14.561 } 00:43:14.561 ] 00:43:14.561 } 00:43:14.561 ] 00:43:14.561 } 00:43:14.819 [2024-11-26 17:40:15.283723] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:43:14.819 [2024-11-26 17:40:15.283861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72026 ] 00:43:14.819 [2024-11-26 17:40:15.471245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:15.078 [2024-11-26 17:40:15.581762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:15.337 Running I/O for 5 seconds... 00:43:17.647 23616.00 IOPS, 92.25 MiB/s [2024-11-26T17:40:19.278Z] 23264.00 IOPS, 90.88 MiB/s [2024-11-26T17:40:20.215Z] 23104.00 IOPS, 90.25 MiB/s [2024-11-26T17:40:21.186Z] 23144.00 IOPS, 90.41 MiB/s 00:43:20.492 Latency(us) 00:43:20.492 [2024-11-26T17:40:21.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:20.492 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:43:20.492 xnvme_bdev : 5.01 23060.31 90.08 0.00 0.00 2765.97 1434.42 8159.10 00:43:20.492 [2024-11-26T17:40:21.186Z] =================================================================================================================== 00:43:20.492 [2024-11-26T17:40:21.186Z] Total : 23060.31 90.08 0.00 0.00 2765.97 1434.42 8159.10 00:43:21.427 00:43:21.427 real 0m13.798s 00:43:21.427 user 0m7.402s 00:43:21.427 sys 0m5.826s 00:43:21.427 ************************************ 00:43:21.427 END TEST xnvme_bdevperf 00:43:21.427 ************************************ 00:43:21.427 17:40:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:21.427 17:40:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:21.685 17:40:22 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:43:21.685 17:40:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:21.685 17:40:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:21.685 17:40:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:43:21.685 ************************************ 00:43:21.685 START TEST xnvme_fio_plugin 00:43:21.685 ************************************ 00:43:21.685 17:40:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:43:21.685 17:40:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:43:21.685 17:40:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:43:21.685 17:40:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:43:21.685 17:40:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:43:21.685 17:40:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:43:21.685 17:40:22 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:43:21.685 17:40:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:43:21.685 17:40:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 
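With conserve_cpu=true the bdevperf passes just above landed at 31135 IOPS randread (vs 28833 with conserve_cpu=false) and 23060 IOPS randwrite (vs 23517). The fio plugin pass now starting uses the same JSON config but drives it through fio's external ioengine: fio_bdev expands to the real fio binary with the spdk_bdev plugin preloaded, and --filename selects the bdev by name rather than a device path. Fully expanded, the command and preload string appear verbatim at the @1356 markers in the trace:

LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev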
00:43:21.685 17:40:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:21.685 17:40:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:21.685 17:40:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:21.685 17:40:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:21.685 17:40:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:43:21.685 17:40:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:21.685 17:40:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:21.685 17:40:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:21.685 17:40:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:43:21.685 17:40:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:21.685 { 00:43:21.685 "subsystems": [ 00:43:21.685 { 00:43:21.685 "subsystem": "bdev", 00:43:21.685 "config": [ 00:43:21.685 { 00:43:21.685 "params": { 00:43:21.685 "io_mechanism": "io_uring", 00:43:21.685 "conserve_cpu": true, 00:43:21.685 "filename": "/dev/nvme0n1", 00:43:21.685 "name": "xnvme_bdev" 00:43:21.685 }, 00:43:21.685 "method": "bdev_xnvme_create" 00:43:21.685 }, 00:43:21.685 { 00:43:21.685 "method": "bdev_wait_for_examine" 00:43:21.685 } 00:43:21.685 ] 00:43:21.685 } 00:43:21.685 ] 00:43:21.685 } 00:43:21.685 17:40:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:21.685 17:40:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:21.685 17:40:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:43:21.685 17:40:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:43:21.685 17:40:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:43:21.944 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:43:21.944 fio-3.35 00:43:21.944 Starting 1 thread 00:43:28.533 00:43:28.533 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72145: Tue Nov 26 17:40:28 2024 00:43:28.533 read: IOPS=32.9k, BW=129MiB/s (135MB/s)(643MiB/5002msec) 00:43:28.533 slat (usec): min=3, max=126, avg= 5.05, stdev= 1.68 00:43:28.533 clat (usec): min=1202, max=2919, avg=1744.94, stdev=248.50 00:43:28.533 lat (usec): min=1205, max=2932, avg=1749.99, stdev=249.34 00:43:28.533 clat percentiles (usec): 00:43:28.533 | 1.00th=[ 1336], 5.00th=[ 1434], 10.00th=[ 1483], 20.00th=[ 1549], 00:43:28.533 | 30.00th=[ 1598], 40.00th=[ 1647], 50.00th=[ 1696], 60.00th=[ 1745], 00:43:28.533 | 70.00th=[ 1827], 80.00th=[ 1909], 90.00th=[ 2089], 95.00th=[ 2245], 00:43:28.533 | 99.00th=[ 2540], 99.50th=[ 2606], 99.90th=[ 2737], 99.95th=[ 2769], 00:43:28.533 | 99.99th=[ 2835] 00:43:28.533 bw ( KiB/s): min=120832, max=142848, per=100.00%, avg=132323.56, 
stdev=8738.23, samples=9 00:43:28.533 iops : min=30208, max=35712, avg=33080.89, stdev=2184.56, samples=9 00:43:28.533 lat (msec) : 2=85.97%, 4=14.03% 00:43:28.533 cpu : usr=47.79%, sys=48.71%, ctx=13, majf=0, minf=762 00:43:28.533 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:43:28.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:28.533 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:43:28.533 issued rwts: total=164672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:28.533 latency : target=0, window=0, percentile=100.00%, depth=64 00:43:28.533 00:43:28.533 Run status group 0 (all jobs): 00:43:28.533 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=643MiB (674MB), run=5002-5002msec 00:43:29.099 ----------------------------------------------------- 00:43:29.099 Suppressions used: 00:43:29.099 count bytes template 00:43:29.099 1 11 /usr/src/fio/parse.c 00:43:29.099 1 8 libtcmalloc_minimal.so 00:43:29.099 1 904 libcrypto.so 00:43:29.099 ----------------------------------------------------- 00:43:29.099 00:43:29.099 17:40:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:43:29.100 17:40:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:43:29.100 17:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:43:29.100 17:40:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:43:29.100 17:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:29.100 17:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:29.100 17:40:29 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:43:29.100 17:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:29.100 17:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:43:29.100 17:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:29.100 17:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:43:29.100 17:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:29.100 17:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:29.100 17:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:29.100 17:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:43:29.100 17:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:29.100 17:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:29.100 17:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:29.100 17:40:29 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:43:29.100 17:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:43:29.100 17:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:43:29.100 { 00:43:29.100 "subsystems": [ 00:43:29.100 { 00:43:29.100 "subsystem": "bdev", 00:43:29.100 "config": [ 00:43:29.100 { 00:43:29.100 "params": { 00:43:29.100 "io_mechanism": "io_uring", 00:43:29.100 "conserve_cpu": true, 00:43:29.100 "filename": "/dev/nvme0n1", 00:43:29.100 "name": "xnvme_bdev" 00:43:29.100 }, 00:43:29.100 "method": "bdev_xnvme_create" 00:43:29.100 }, 00:43:29.100 { 00:43:29.100 "method": "bdev_wait_for_examine" 00:43:29.100 } 00:43:29.100 ] 00:43:29.100 } 00:43:29.100 ] 00:43:29.100 } 00:43:29.359 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:43:29.359 fio-3.35 00:43:29.359 Starting 1 thread 00:43:35.929 00:43:35.929 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72243: Tue Nov 26 17:40:35 2024 00:43:35.929 write: IOPS=32.4k, BW=127MiB/s (133MB/s)(633MiB/5002msec); 0 zone resets 00:43:35.929 slat (nsec): min=3723, max=83144, avg=5410.65, stdev=1896.71 00:43:35.929 clat (usec): min=320, max=4472, avg=1764.07, stdev=260.16 00:43:35.929 lat (usec): min=326, max=4478, avg=1769.48, stdev=261.07 00:43:35.929 clat percentiles (usec): 00:43:35.929 | 1.00th=[ 1369], 5.00th=[ 1434], 10.00th=[ 1483], 20.00th=[ 1549], 00:43:35.929 | 30.00th=[ 1614], 40.00th=[ 1663], 50.00th=[ 1713], 60.00th=[ 1762], 00:43:35.929 | 70.00th=[ 1844], 80.00th=[ 1958], 90.00th=[ 2147], 95.00th=[ 2278], 00:43:35.929 | 99.00th=[ 2540], 99.50th=[ 2606], 99.90th=[ 2769], 99.95th=[ 2868], 00:43:35.929 | 99.99th=[ 3163] 00:43:35.929 bw ( KiB/s): min=111393, max=144384, per=100.00%, avg=130238.33, stdev=12263.78, samples=9 00:43:35.929 iops : min=27848, max=36096, avg=32559.56, stdev=3065.99, samples=9 00:43:35.929 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:43:35.929 lat (msec) : 2=82.95%, 4=17.03%, 10=0.01% 00:43:35.929 cpu : usr=49.37%, sys=47.31%, ctx=11, majf=0, minf=763 00:43:35.929 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:43:35.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:35.929 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:43:35.929 issued rwts: total=0,162146,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:35.929 latency : target=0, window=0, percentile=100.00%, depth=64 00:43:35.929 00:43:35.929 Run status group 0 (all jobs): 00:43:35.929 WRITE: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=633MiB (664MB), run=5002-5002msec 00:43:36.498 ----------------------------------------------------- 00:43:36.498 Suppressions used: 00:43:36.498 count bytes template 00:43:36.498 1 11 /usr/src/fio/parse.c 00:43:36.498 1 8 libtcmalloc_minimal.so 00:43:36.498 1 904 libcrypto.so 00:43:36.498 ----------------------------------------------------- 00:43:36.498 00:43:36.498 00:43:36.498 real 0m14.859s 00:43:36.498 user 0m8.611s 00:43:36.498 sys 0m5.645s 00:43:36.498 ************************************ 00:43:36.498 END TEST xnvme_fio_plugin 00:43:36.498 ************************************ 
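That closes the io_uring half of the suite. Comparing the two fio randread passes, conserve_cpu=true actually came out ahead here: 32.9k IOPS (usr=47.79%, sys=48.71%) against 29.0k IOPS (usr=31.94%, sys=67.00%) for the conserve_cpu=false pass. The driver loop that produces both passes can be reconstructed from the xnvme.sh@82-88 markers in the trace (a sketch of the pattern, not the verbatim script; the two-element array is inferred from the false/true passes):

# xnvme_conserve_cpu=(false true) -- inferred from the two passes in this log
for cc in "${xnvme_conserve_cpu[@]}"; do
  method_bdev_xnvme_create_0["conserve_cpu"]=$cc   # patched into the gen_conf JSON
  conserve_cpu=$cc
  run_test xnvme_rpc xnvme_rpc
  run_test xnvme_bdevperf xnvme_bdevperf
  run_test xnvme_fio_plugin xnvme_fio_plugin
done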
00:43:36.498 17:40:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:36.498 17:40:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:43:36.498 17:40:37 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:43:36.498 17:40:37 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:43:36.498 17:40:37 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:43:36.498 17:40:37 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:43:36.498 17:40:37 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:43:36.498 17:40:37 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:43:36.498 17:40:37 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:43:36.498 17:40:37 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:43:36.498 17:40:37 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:43:36.498 17:40:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:36.498 17:40:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:36.498 17:40:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:43:36.498 ************************************ 00:43:36.498 START TEST xnvme_rpc 00:43:36.498 ************************************ 00:43:36.498 17:40:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:43:36.498 17:40:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:43:36.498 17:40:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:43:36.498 17:40:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:43:36.498 17:40:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:43:36.498 17:40:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72329 00:43:36.498 17:40:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:43:36.498 17:40:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72329 00:43:36.498 17:40:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72329 ']' 00:43:36.499 17:40:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:36.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:36.499 17:40:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:36.499 17:40:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:36.499 17:40:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:36.499 17:40:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:43:36.758 [2024-11-26 17:40:37.215069] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
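The whole suite now repeats with io_mechanism=io_uring_cmd, which targets /dev/ng0n1 — the NVMe generic character device — and submits NVMe passthrough commands over io_uring instead of going through the block layer as /dev/nvme0n1 does. On this first io_uring_cmd pass conserve_cpu is false, so the cc flag expands to an empty string ('' in the rpc_cmd call below); by hand it would be roughly (a sketch, same socket assumption as before):

scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd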
00:43:36.758 [2024-11-26 17:40:37.215204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72329 ] 00:43:36.758 [2024-11-26 17:40:37.400415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:37.017 [2024-11-26 17:40:37.511299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:37.954 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:37.954 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:43:37.954 17:40:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:43:37.954 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:37.954 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:43:37.954 xnvme_bdev 00:43:37.954 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:37.954 17:40:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:43:37.954 17:40:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72329 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72329 ']' 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72329 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72329 00:43:37.955 killing process with pid 72329 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72329' 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72329 00:43:37.955 17:40:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72329 00:43:40.564 ************************************ 00:43:40.564 END TEST xnvme_rpc 00:43:40.564 ************************************ 00:43:40.564 00:43:40.564 real 0m3.954s 00:43:40.564 user 0m3.992s 00:43:40.564 sys 0m0.570s 00:43:40.564 17:40:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:40.564 17:40:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:43:40.564 17:40:41 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:43:40.564 17:40:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:40.564 17:40:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:40.564 17:40:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:43:40.564 ************************************ 00:43:40.564 START TEST xnvme_bdevperf 00:43:40.564 ************************************ 00:43:40.564 17:40:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:43:40.564 17:40:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:43:40.564 17:40:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:43:40.564 17:40:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:43:40.564 17:40:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:43:40.564 17:40:41 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:43:40.564 17:40:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:43:40.564 17:40:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:40.564 { 00:43:40.564 "subsystems": [ 00:43:40.564 { 00:43:40.564 "subsystem": "bdev", 00:43:40.564 "config": [ 00:43:40.564 { 00:43:40.564 "params": { 00:43:40.564 "io_mechanism": "io_uring_cmd", 00:43:40.564 "conserve_cpu": false, 00:43:40.564 "filename": "/dev/ng0n1", 00:43:40.564 "name": "xnvme_bdev" 00:43:40.564 }, 00:43:40.564 "method": "bdev_xnvme_create" 00:43:40.564 }, 00:43:40.564 { 00:43:40.564 "method": "bdev_wait_for_examine" 00:43:40.564 } 00:43:40.564 ] 00:43:40.564 } 00:43:40.564 ] 00:43:40.564 } 00:43:40.564 [2024-11-26 17:40:41.232236] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:43:40.564 [2024-11-26 17:40:41.232372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72414 ] 00:43:40.823 [2024-11-26 17:40:41.421328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:41.082 [2024-11-26 17:40:41.531098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:41.341 Running I/O for 5 seconds... 00:43:43.213 27776.00 IOPS, 108.50 MiB/s [2024-11-26T17:40:45.286Z] 27424.00 IOPS, 107.12 MiB/s [2024-11-26T17:40:46.223Z] 28117.33 IOPS, 109.83 MiB/s [2024-11-26T17:40:47.160Z] 26992.00 IOPS, 105.44 MiB/s 00:43:46.466 Latency(us) 00:43:46.466 [2024-11-26T17:40:47.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:46.466 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:43:46.466 xnvme_bdev : 5.01 26255.68 102.56 0.00 0.00 2429.80 1112.01 7895.90 00:43:46.466 [2024-11-26T17:40:47.160Z] =================================================================================================================== 00:43:46.466 [2024-11-26T17:40:47.160Z] Total : 26255.68 102.56 0.00 0.00 2429.80 1112.01 7895.90 00:43:47.405 17:40:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:43:47.405 17:40:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:43:47.405 17:40:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:43:47.405 17:40:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:43:47.405 17:40:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:47.405 { 00:43:47.405 "subsystems": [ 00:43:47.405 { 00:43:47.405 "subsystem": "bdev", 00:43:47.405 "config": [ 00:43:47.405 { 00:43:47.405 "params": { 00:43:47.405 "io_mechanism": "io_uring_cmd", 00:43:47.405 "conserve_cpu": false, 00:43:47.405 "filename": "/dev/ng0n1", 00:43:47.405 "name": "xnvme_bdev" 00:43:47.405 }, 00:43:47.405 "method": "bdev_xnvme_create" 00:43:47.405 }, 00:43:47.405 { 00:43:47.405 "method": "bdev_wait_for_examine" 00:43:47.405 } 00:43:47.405 ] 00:43:47.405 } 00:43:47.405 ] 00:43:47.405 } 00:43:47.664 [2024-11-26 17:40:48.141416] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
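Note the workload list: for io_uring_cmd the bdevperf loop (local -n io_pattern_ref=io_uring_cmd above) runs unmap and write_zeroes on top of randread/randwrite. Neither of those carries a data payload, which is why the unmap pass further below posts ~61k IOPS against the ~26k of the randread pass just above on the same device (and 61427.47 * 4096 / 2^20 = 239.95 MiB/s, again matching the table). Each extra pass is the same invocation with only -w swapped (a sketch; the JSON file stands in for the io_uring_cmd config printed above):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/xnvme_io_uring_cmd.json \
    -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096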
00:43:47.664 [2024-11-26 17:40:48.141577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72493 ] 00:43:47.665 [2024-11-26 17:40:48.328215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:47.923 [2024-11-26 17:40:48.438517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:48.182 Running I/O for 5 seconds... 00:43:50.499 24960.00 IOPS, 97.50 MiB/s [2024-11-26T17:40:52.131Z] 24288.00 IOPS, 94.88 MiB/s [2024-11-26T17:40:53.070Z] 23829.33 IOPS, 93.08 MiB/s [2024-11-26T17:40:54.008Z] 23632.00 IOPS, 92.31 MiB/s 00:43:53.314 Latency(us) 00:43:53.314 [2024-11-26T17:40:54.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:53.314 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:43:53.314 xnvme_bdev : 5.01 23565.28 92.05 0.00 0.00 2706.52 1276.50 7685.35 00:43:53.314 [2024-11-26T17:40:54.008Z] =================================================================================================================== 00:43:53.314 [2024-11-26T17:40:54.008Z] Total : 23565.28 92.05 0.00 0.00 2706.52 1276.50 7685.35 00:43:54.691 17:40:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:43:54.691 17:40:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:43:54.691 17:40:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:43:54.691 17:40:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:43:54.691 17:40:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:43:54.691 { 00:43:54.691 "subsystems": [ 00:43:54.691 { 00:43:54.691 "subsystem": "bdev", 00:43:54.691 "config": [ 00:43:54.691 { 00:43:54.691 "params": { 00:43:54.691 "io_mechanism": "io_uring_cmd", 00:43:54.691 "conserve_cpu": false, 00:43:54.691 "filename": "/dev/ng0n1", 00:43:54.691 "name": "xnvme_bdev" 00:43:54.691 }, 00:43:54.691 "method": "bdev_xnvme_create" 00:43:54.691 }, 00:43:54.691 { 00:43:54.691 "method": "bdev_wait_for_examine" 00:43:54.691 } 00:43:54.691 ] 00:43:54.691 } 00:43:54.691 ] 00:43:54.691 } 00:43:54.691 [2024-11-26 17:40:55.052564] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:43:54.691 [2024-11-26 17:40:55.052863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72573 ] 00:43:54.691 [2024-11-26 17:40:55.238997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:54.691 [2024-11-26 17:40:55.351958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:55.258 Running I/O for 5 seconds... 
00:43:57.125 66048.00 IOPS, 258.00 MiB/s [2024-11-26T17:40:58.756Z] 63616.00 IOPS, 248.50 MiB/s [2024-11-26T17:40:59.707Z] 61034.67 IOPS, 238.42 MiB/s [2024-11-26T17:41:01.083Z] 60208.00 IOPS, 235.19 MiB/s [2024-11-26T17:41:01.083Z] 61440.00 IOPS, 240.00 MiB/s 00:44:00.389 Latency(us) 00:44:00.389 [2024-11-26T17:41:01.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:00.389 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:44:00.389 xnvme_bdev : 5.00 61427.47 239.95 0.00 0.00 1038.21 605.35 4842.82 00:44:00.389 [2024-11-26T17:41:01.083Z] =================================================================================================================== 00:44:00.389 [2024-11-26T17:41:01.083Z] Total : 61427.47 239.95 0.00 0.00 1038.21 605.35 4842.82 00:44:01.323 17:41:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:44:01.323 17:41:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:44:01.323 17:41:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:44:01.323 17:41:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:44:01.323 17:41:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:44:01.323 { 00:44:01.323 "subsystems": [ 00:44:01.323 { 00:44:01.323 "subsystem": "bdev", 00:44:01.323 "config": [ 00:44:01.323 { 00:44:01.323 "params": { 00:44:01.323 "io_mechanism": "io_uring_cmd", 00:44:01.323 "conserve_cpu": false, 00:44:01.323 "filename": "/dev/ng0n1", 00:44:01.323 "name": "xnvme_bdev" 00:44:01.323 }, 00:44:01.323 "method": "bdev_xnvme_create" 00:44:01.323 }, 00:44:01.323 { 00:44:01.323 "method": "bdev_wait_for_examine" 00:44:01.323 } 00:44:01.323 ] 00:44:01.323 } 00:44:01.323 ] 00:44:01.323 } 00:44:01.323 [2024-11-26 17:41:01.940459] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:44:01.323 [2024-11-26 17:41:01.940604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72656 ] 00:44:01.582 [2024-11-26 17:41:02.127867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:01.582 [2024-11-26 17:41:02.237669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:02.149 Running I/O for 5 seconds... 
00:44:04.020 60350.00 IOPS, 235.74 MiB/s [2024-11-26T17:41:05.648Z] 37047.50 IOPS, 144.72 MiB/s [2024-11-26T17:41:06.583Z] 37205.33 IOPS, 145.33 MiB/s [2024-11-26T17:41:07.957Z] 39329.75 IOPS, 153.63 MiB/s [2024-11-26T17:41:07.957Z] 43286.20 IOPS, 169.09 MiB/s 00:44:07.263 Latency(us) 00:44:07.263 [2024-11-26T17:41:07.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:07.263 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:44:07.263 xnvme_bdev : 5.00 43277.02 169.05 0.00 0.00 1475.49 67.44 46533.19 00:44:07.263 [2024-11-26T17:41:07.957Z] =================================================================================================================== 00:44:07.263 [2024-11-26T17:41:07.957Z] Total : 43277.02 169.05 0.00 0.00 1475.49 67.44 46533.19 00:44:08.202 00:44:08.202 real 0m27.590s 00:44:08.202 user 0m14.607s 00:44:08.202 sys 0m12.519s 00:44:08.202 17:41:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:08.202 ************************************ 00:44:08.202 END TEST xnvme_bdevperf 00:44:08.202 ************************************ 00:44:08.202 17:41:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:44:08.202 17:41:08 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:44:08.202 17:41:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:08.202 17:41:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:08.202 17:41:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:44:08.202 ************************************ 00:44:08.202 START TEST xnvme_fio_plugin 00:44:08.202 ************************************ 00:44:08.202 17:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:44:08.202 17:41:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:44:08.202 17:41:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:44:08.202 17:41:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:44:08.202 17:41:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:44:08.202 17:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:44:08.202 17:41:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:44:08.202 17:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:08.202 17:41:08 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:44:08.202 17:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:08.202 17:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:44:08.202 17:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:08.202 17:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
00:44:08.202 17:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:44:08.202 17:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:08.202 17:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:08.202 17:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:08.202 17:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:44:08.202 17:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:08.202 17:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:08.202 17:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:08.202 17:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:44:08.202 17:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:44:08.202 17:41:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:44:08.202 { 00:44:08.202 "subsystems": [ 00:44:08.202 { 00:44:08.202 "subsystem": "bdev", 00:44:08.202 "config": [ 00:44:08.202 { 00:44:08.202 "params": { 00:44:08.202 "io_mechanism": "io_uring_cmd", 00:44:08.202 "conserve_cpu": false, 00:44:08.202 "filename": "/dev/ng0n1", 00:44:08.202 "name": "xnvme_bdev" 00:44:08.202 }, 00:44:08.202 "method": "bdev_xnvme_create" 00:44:08.202 }, 00:44:08.202 { 00:44:08.202 "method": "bdev_wait_for_examine" 00:44:08.202 } 00:44:08.202 ] 00:44:08.202 } 00:44:08.202 ] 00:44:08.202 } 00:44:08.461 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:44:08.461 fio-3.35 00:44:08.461 Starting 1 thread 00:44:15.044 00:44:15.045 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72776: Tue Nov 26 17:41:14 2024 00:44:15.045 read: IOPS=31.5k, BW=123MiB/s (129MB/s)(616MiB/5003msec) 00:44:15.045 slat (nsec): min=2262, max=71466, avg=5543.38, stdev=2845.45 00:44:15.045 clat (usec): min=943, max=5821, avg=1807.60, stdev=436.90 00:44:15.045 lat (usec): min=946, max=5832, avg=1813.14, stdev=438.62 00:44:15.045 clat percentiles (usec): 00:44:15.045 | 1.00th=[ 1074], 5.00th=[ 1172], 10.00th=[ 1237], 20.00th=[ 1336], 00:44:15.045 | 30.00th=[ 1483], 40.00th=[ 1680], 50.00th=[ 1844], 60.00th=[ 1975], 00:44:15.045 | 70.00th=[ 2089], 80.00th=[ 2212], 90.00th=[ 2343], 95.00th=[ 2474], 00:44:15.045 | 99.00th=[ 2638], 99.50th=[ 2737], 99.90th=[ 3163], 99.95th=[ 5276], 00:44:15.045 | 99.99th=[ 5669] 00:44:15.045 bw ( KiB/s): min=106496, max=145408, per=100.00%, avg=126862.22, stdev=11987.74, samples=9 00:44:15.045 iops : min=26624, max=36352, avg=31715.56, stdev=2996.93, samples=9 00:44:15.045 lat (usec) : 1000=0.07% 00:44:15.045 lat (msec) : 2=62.10%, 4=37.74%, 10=0.08% 00:44:15.045 cpu : usr=35.95%, sys=62.97%, ctx=26, majf=0, minf=762 00:44:15.045 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:44:15.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:15.045 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=1.5%, >=64=0.0% 00:44:15.045 issued rwts: total=157823,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:15.045 latency : target=0, window=0, percentile=100.00%, depth=64 00:44:15.045 00:44:15.045 Run status group 0 (all jobs): 00:44:15.045 READ: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=616MiB (646MB), run=5003-5003msec 00:44:15.613 ----------------------------------------------------- 00:44:15.613 Suppressions used: 00:44:15.613 count bytes template 00:44:15.613 1 11 /usr/src/fio/parse.c 00:44:15.613 1 8 libtcmalloc_minimal.so 00:44:15.613 1 904 libcrypto.so 00:44:15.613 ----------------------------------------------------- 00:44:15.613 00:44:15.613 17:41:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:44:15.613 17:41:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:44:15.613 17:41:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:44:15.613 17:41:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:44:15.613 17:41:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:44:15.613 17:41:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:44:15.613 17:41:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:15.613 17:41:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:15.613 17:41:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:15.613 17:41:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:15.613 17:41:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:44:15.613 17:41:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:15.613 17:41:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:15.613 17:41:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:15.613 17:41:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:44:15.613 17:41:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:15.613 17:41:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:15.613 17:41:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:15.613 17:41:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:44:15.613 17:41:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:44:15.614 17:41:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k 
--iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:44:15.614 { 00:44:15.614 "subsystems": [ 00:44:15.614 { 00:44:15.614 "subsystem": "bdev", 00:44:15.614 "config": [ 00:44:15.614 { 00:44:15.614 "params": { 00:44:15.614 "io_mechanism": "io_uring_cmd", 00:44:15.614 "conserve_cpu": false, 00:44:15.614 "filename": "/dev/ng0n1", 00:44:15.614 "name": "xnvme_bdev" 00:44:15.614 }, 00:44:15.614 "method": "bdev_xnvme_create" 00:44:15.614 }, 00:44:15.614 { 00:44:15.614 "method": "bdev_wait_for_examine" 00:44:15.614 } 00:44:15.614 ] 00:44:15.614 } 00:44:15.614 ] 00:44:15.614 } 00:44:15.873 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:44:15.873 fio-3.35 00:44:15.873 Starting 1 thread 00:44:22.447 00:44:22.447 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72873: Tue Nov 26 17:41:22 2024 00:44:22.447 write: IOPS=26.5k, BW=103MiB/s (108MB/s)(517MiB/5001msec); 0 zone resets 00:44:22.447 slat (usec): min=2, max=229, avg= 7.36, stdev= 3.25 00:44:22.447 clat (usec): min=795, max=5718, avg=2122.82, stdev=441.08 00:44:22.447 lat (usec): min=798, max=5745, avg=2130.18, stdev=442.79 00:44:22.447 clat percentiles (usec): 00:44:22.447 | 1.00th=[ 955], 5.00th=[ 1074], 10.00th=[ 1254], 20.00th=[ 1958], 00:44:22.447 | 30.00th=[ 2073], 40.00th=[ 2147], 50.00th=[ 2212], 60.00th=[ 2278], 00:44:22.447 | 70.00th=[ 2343], 80.00th=[ 2442], 90.00th=[ 2573], 95.00th=[ 2638], 00:44:22.447 | 99.00th=[ 2737], 99.50th=[ 2802], 99.90th=[ 2999], 99.95th=[ 3261], 00:44:22.447 | 99.99th=[ 5604] 00:44:22.447 bw ( KiB/s): min=94208, max=169133, per=100.00%, avg=107254.78, stdev=23842.24, samples=9 00:44:22.447 iops : min=23552, max=42283, avg=26813.67, stdev=5960.48, samples=9 00:44:22.447 lat (usec) : 1000=2.41% 00:44:22.447 lat (msec) : 2=21.03%, 4=76.51%, 10=0.05% 00:44:22.447 cpu : usr=38.80%, sys=59.86%, ctx=13, majf=0, minf=763 00:44:22.447 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:44:22.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:22.447 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:44:22.447 issued rwts: total=0,132416,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:22.447 latency : target=0, window=0, percentile=100.00%, depth=64 00:44:22.447 00:44:22.447 Run status group 0 (all jobs): 00:44:22.447 WRITE: bw=103MiB/s (108MB/s), 103MiB/s-103MiB/s (108MB/s-108MB/s), io=517MiB (542MB), run=5001-5001msec 00:44:23.017 ----------------------------------------------------- 00:44:23.017 Suppressions used: 00:44:23.017 count bytes template 00:44:23.017 1 11 /usr/src/fio/parse.c 00:44:23.017 1 8 libtcmalloc_minimal.so 00:44:23.017 1 904 libcrypto.so 00:44:23.017 ----------------------------------------------------- 00:44:23.017 00:44:23.275 00:44:23.275 real 0m14.911s 00:44:23.275 user 0m7.673s 00:44:23.275 sys 0m6.864s 00:44:23.275 ************************************ 00:44:23.275 END TEST xnvme_fio_plugin 00:44:23.275 ************************************ 00:44:23.275 17:41:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:23.275 17:41:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:44:23.275 17:41:23 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:44:23.275 17:41:23 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:44:23.275 17:41:23 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 
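All of the bdevperf runs above share one pattern: the harness generates a small JSON bdev config on the fly and streams it to bdevperf over an anonymous file descriptor (the --json /dev/fd/62 in each trace) instead of writing a config file. A minimal sketch of reproducing a single run by hand, assuming the SPDK build paths shown in the log and an io_uring_cmd-capable char device at /dev/ng0n1 — this is an illustration, not part of the recorded run:

#!/usr/bin/env bash
# Reproduce one xnvme bdevperf run outside the autotest harness.
SPDK=/home/vagrant/spdk_repo/spdk

conf='{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "io_uring_cmd",
            "conserve_cpu": false,
            "filename": "/dev/ng0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}'

# <(...) shows up inside bdevperf as /dev/fd/NN, matching the --json
# /dev/fd/62 invocations traced above: queue depth 64, 4 KiB I/O,
# 5-second timed run against the bdev named by -T.
"$SPDK"/build/examples/bdevperf --json <(printf '%s' "$conf") \
    -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096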
00:44:23.275 17:41:23 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:44:23.276 17:41:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:23.276 17:41:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:23.276 17:41:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:44:23.276 ************************************ 00:44:23.276 START TEST xnvme_rpc 00:44:23.276 ************************************ 00:44:23.276 17:41:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:44:23.276 17:41:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:44:23.276 17:41:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:44:23.276 17:41:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:44:23.276 17:41:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:44:23.276 17:41:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72959 00:44:23.276 17:41:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:23.276 17:41:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72959 00:44:23.276 17:41:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72959 ']' 00:44:23.276 17:41:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:23.276 17:41:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:23.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:23.276 17:41:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:23.276 17:41:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:23.276 17:41:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:44:23.276 [2024-11-26 17:41:23.909600] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:44:23.276 [2024-11-26 17:41:23.909744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72959 ] 00:44:23.534 [2024-11-26 17:41:24.098459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:23.534 [2024-11-26 17:41:24.210873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:24.471 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:24.471 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:44:24.471 17:41:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:44:24.471 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:24.471 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:44:24.471 xnvme_bdev 00:44:24.471 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:24.471 17:41:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:44:24.471 17:41:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:44:24.471 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:24.471 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:44:24.471 17:41:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:44:24.471 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:24.471 17:41:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:44:24.471 17:41:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:44:24.471 17:41:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:44:24.471 17:41:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:44:24.471 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:24.471 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72959 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72959 ']' 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72959 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72959 00:44:24.736 killing process with pid 72959 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72959' 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72959 00:44:24.736 17:41:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72959 00:44:27.292 00:44:27.292 real 0m4.010s 00:44:27.292 user 0m4.046s 00:44:27.292 sys 0m0.603s 00:44:27.292 17:41:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:27.292 17:41:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:44:27.292 ************************************ 00:44:27.292 END TEST xnvme_rpc 00:44:27.292 ************************************ 00:44:27.292 17:41:27 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:44:27.292 17:41:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:27.292 17:41:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:27.292 17:41:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:44:27.292 ************************************ 00:44:27.292 START TEST xnvme_bdevperf 00:44:27.292 ************************************ 00:44:27.292 17:41:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:44:27.292 17:41:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:44:27.292 17:41:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:44:27.292 17:41:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:44:27.292 17:41:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:44:27.292 17:41:27 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:44:27.292 17:41:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:44:27.292 17:41:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:44:27.292 { 00:44:27.292 "subsystems": [ 00:44:27.292 { 00:44:27.292 "subsystem": "bdev", 00:44:27.292 "config": [ 00:44:27.292 { 00:44:27.292 "params": { 00:44:27.292 "io_mechanism": "io_uring_cmd", 00:44:27.292 "conserve_cpu": true, 00:44:27.292 "filename": "/dev/ng0n1", 00:44:27.292 "name": "xnvme_bdev" 00:44:27.292 }, 00:44:27.292 "method": "bdev_xnvme_create" 00:44:27.292 }, 00:44:27.292 { 00:44:27.292 "method": "bdev_wait_for_examine" 00:44:27.292 } 00:44:27.292 ] 00:44:27.292 } 00:44:27.292 ] 00:44:27.292 } 00:44:27.292 [2024-11-26 17:41:27.973768] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:44:27.292 [2024-11-26 17:41:27.974092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73039 ] 00:44:27.551 [2024-11-26 17:41:28.155635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:27.810 [2024-11-26 17:41:28.263511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:28.069 Running I/O for 5 seconds... 00:44:29.946 29440.00 IOPS, 115.00 MiB/s [2024-11-26T17:41:32.016Z] 27296.00 IOPS, 106.62 MiB/s [2024-11-26T17:41:32.953Z] 27093.33 IOPS, 105.83 MiB/s [2024-11-26T17:41:33.891Z] 28224.00 IOPS, 110.25 MiB/s 00:44:33.197 Latency(us) 00:44:33.197 [2024-11-26T17:41:33.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:33.197 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:44:33.197 xnvme_bdev : 5.01 27313.71 106.69 0.00 0.00 2335.67 914.61 8211.74 00:44:33.197 [2024-11-26T17:41:33.891Z] =================================================================================================================== 00:44:33.197 [2024-11-26T17:41:33.891Z] Total : 27313.71 106.69 0.00 0.00 2335.67 914.61 8211.74 00:44:34.136 17:41:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:44:34.136 17:41:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:44:34.136 17:41:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:44:34.136 17:41:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:44:34.136 17:41:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:44:34.136 { 00:44:34.136 "subsystems": [ 00:44:34.136 { 00:44:34.136 "subsystem": "bdev", 00:44:34.136 "config": [ 00:44:34.136 { 00:44:34.136 "params": { 00:44:34.136 "io_mechanism": "io_uring_cmd", 00:44:34.136 "conserve_cpu": true, 00:44:34.136 "filename": "/dev/ng0n1", 00:44:34.136 "name": "xnvme_bdev" 00:44:34.136 }, 00:44:34.136 "method": "bdev_xnvme_create" 00:44:34.136 }, 00:44:34.136 { 00:44:34.136 "method": "bdev_wait_for_examine" 00:44:34.136 } 00:44:34.136 ] 00:44:34.136 } 00:44:34.136 ] 00:44:34.136 } 00:44:34.136 [2024-11-26 17:41:34.819876] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
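The xnvme_rpc test above is a round-trip check: create the bdev over JSON-RPC, read the saved config back, compare each parameter (name, filename, io_mechanism, conserve_cpu), then delete the bdev and kill the target. A condensed sketch of the same round trip, assuming a running spdk_tgt and the stock scripts/rpc.py client in place of the harness's rpc_cmd wrapper:

SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"
# (start the target first: "$SPDK"/build/bin/spdk_tgt &)

# Create the bdev; trailing -c maps to conserve_cpu=true, the branch
# exercised in this pass of the test.
"$RPC" bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c

# Read back what the target recorded and pick out one field at a time,
# exactly as the test's rpc_xnvme helper does with jq.
"$RPC" framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
# -> true

# Tear down.
"$RPC" bdev_xnvme_delete xnvme_bdev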
00:44:34.136 [2024-11-26 17:41:34.820013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73119 ] 00:44:34.396 [2024-11-26 17:41:35.004578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:34.655 [2024-11-26 17:41:35.110974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:34.915 Running I/O for 5 seconds... 00:44:36.785 26228.00 IOPS, 102.45 MiB/s [2024-11-26T17:41:38.851Z] 25146.00 IOPS, 98.23 MiB/s [2024-11-26T17:41:39.788Z] 24657.33 IOPS, 96.32 MiB/s [2024-11-26T17:41:40.750Z] 25437.00 IOPS, 99.36 MiB/s 00:44:40.056 Latency(us) 00:44:40.056 [2024-11-26T17:41:40.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:40.056 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:44:40.056 xnvme_bdev : 5.01 25231.27 98.56 0.00 0.00 2527.96 875.13 8211.74 00:44:40.056 [2024-11-26T17:41:40.750Z] =================================================================================================================== 00:44:40.056 [2024-11-26T17:41:40.750Z] Total : 25231.27 98.56 0.00 0.00 2527.96 875.13 8211.74 00:44:40.990 17:41:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:44:40.990 17:41:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:44:40.990 17:41:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:44:40.990 17:41:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:44:40.990 17:41:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:44:40.990 { 00:44:40.990 "subsystems": [ 00:44:40.990 { 00:44:40.990 "subsystem": "bdev", 00:44:40.990 "config": [ 00:44:40.990 { 00:44:40.990 "params": { 00:44:40.990 "io_mechanism": "io_uring_cmd", 00:44:40.991 "conserve_cpu": true, 00:44:40.991 "filename": "/dev/ng0n1", 00:44:40.991 "name": "xnvme_bdev" 00:44:40.991 }, 00:44:40.991 "method": "bdev_xnvme_create" 00:44:40.991 }, 00:44:40.991 { 00:44:40.991 "method": "bdev_wait_for_examine" 00:44:40.991 } 00:44:40.991 ] 00:44:40.991 } 00:44:40.991 ] 00:44:40.991 } 00:44:40.991 [2024-11-26 17:41:41.605382] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:44:40.991 [2024-11-26 17:41:41.605530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73193 ] 00:44:41.248 [2024-11-26 17:41:41.790103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:41.248 [2024-11-26 17:41:41.892623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:41.815 Running I/O for 5 seconds... 
00:44:43.687 71488.00 IOPS, 279.25 MiB/s [2024-11-26T17:41:45.318Z] 71872.00 IOPS, 280.75 MiB/s [2024-11-26T17:41:46.254Z] 72426.67 IOPS, 282.92 MiB/s [2024-11-26T17:41:47.632Z] 71584.00 IOPS, 279.62 MiB/s 00:44:46.938 Latency(us) 00:44:46.938 [2024-11-26T17:41:47.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:46.938 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:44:46.938 xnvme_bdev : 5.00 71048.32 277.53 0.00 0.00 898.00 625.09 2868.84 00:44:46.938 [2024-11-26T17:41:47.632Z] =================================================================================================================== 00:44:46.938 [2024-11-26T17:41:47.632Z] Total : 71048.32 277.53 0.00 0.00 898.00 625.09 2868.84 00:44:47.875 17:41:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:44:47.875 17:41:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:44:47.875 17:41:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:44:47.875 17:41:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:44:47.875 17:41:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:44:47.875 { 00:44:47.875 "subsystems": [ 00:44:47.875 { 00:44:47.875 "subsystem": "bdev", 00:44:47.875 "config": [ 00:44:47.875 { 00:44:47.875 "params": { 00:44:47.875 "io_mechanism": "io_uring_cmd", 00:44:47.875 "conserve_cpu": true, 00:44:47.875 "filename": "/dev/ng0n1", 00:44:47.876 "name": "xnvme_bdev" 00:44:47.876 }, 00:44:47.876 "method": "bdev_xnvme_create" 00:44:47.876 }, 00:44:47.876 { 00:44:47.876 "method": "bdev_wait_for_examine" 00:44:47.876 } 00:44:47.876 ] 00:44:47.876 } 00:44:47.876 ] 00:44:47.876 } 00:44:47.876 [2024-11-26 17:41:48.402849] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:44:47.876 [2024-11-26 17:41:48.403000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73273 ] 00:44:48.134 [2024-11-26 17:41:48.590585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:48.134 [2024-11-26 17:41:48.703609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:48.393 Running I/O for 5 seconds... 
00:44:50.700 45410.00 IOPS, 177.38 MiB/s [2024-11-26T17:41:52.327Z] 48584.50 IOPS, 189.78 MiB/s [2024-11-26T17:41:53.261Z] 51230.33 IOPS, 200.12 MiB/s [2024-11-26T17:41:54.197Z] 49486.00 IOPS, 193.30 MiB/s 00:44:53.503 Latency(us) 00:44:53.503 [2024-11-26T17:41:54.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:53.503 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:44:53.503 xnvme_bdev : 5.00 49742.64 194.31 0.00 0.00 1280.78 65.80 16002.36 00:44:53.503 [2024-11-26T17:41:54.197Z] =================================================================================================================== 00:44:53.503 [2024-11-26T17:41:54.197Z] Total : 49742.64 194.31 0.00 0.00 1280.78 65.80 16002.36 00:44:54.440 00:44:54.440 real 0m27.234s 00:44:54.440 user 0m16.897s 00:44:54.440 sys 0m8.175s 00:44:54.440 17:41:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:54.440 17:41:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:44:54.440 ************************************ 00:44:54.440 END TEST xnvme_bdevperf 00:44:54.440 ************************************ 00:44:54.699 17:41:55 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:44:54.699 17:41:55 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:54.699 17:41:55 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:54.699 17:41:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:44:54.699 ************************************ 00:44:54.699 START TEST xnvme_fio_plugin 00:44:54.699 ************************************ 00:44:54.699 17:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:44:54.699 17:41:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:44:54.699 17:41:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:44:54.699 17:41:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:44:54.699 17:41:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:44:54.699 17:41:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:44:54.699 17:41:55 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:44:54.699 17:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:44:54.699 17:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:44:54.699 17:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:54.699 17:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:54.699 17:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:54.699 17:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:54.699 17:41:55 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1345 -- # shift 00:44:54.699 17:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:54.699 17:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:54.699 17:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:44:54.699 17:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:44:54.699 17:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:54.699 { 00:44:54.699 "subsystems": [ 00:44:54.699 { 00:44:54.699 "subsystem": "bdev", 00:44:54.699 "config": [ 00:44:54.699 { 00:44:54.699 "params": { 00:44:54.699 "io_mechanism": "io_uring_cmd", 00:44:54.699 "conserve_cpu": true, 00:44:54.699 "filename": "/dev/ng0n1", 00:44:54.699 "name": "xnvme_bdev" 00:44:54.699 }, 00:44:54.699 "method": "bdev_xnvme_create" 00:44:54.699 }, 00:44:54.699 { 00:44:54.699 "method": "bdev_wait_for_examine" 00:44:54.699 } 00:44:54.699 ] 00:44:54.699 } 00:44:54.699 ] 00:44:54.699 } 00:44:54.700 17:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:54.700 17:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:54.700 17:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:44:54.700 17:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:44:54.700 17:41:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:44:54.959 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:44:54.959 fio-3.35 00:44:54.959 Starting 1 thread 00:45:01.524 00:45:01.524 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73391: Tue Nov 26 17:42:01 2024 00:45:01.524 read: IOPS=30.9k, BW=121MiB/s (126MB/s)(603MiB/5002msec) 00:45:01.524 slat (usec): min=2, max=145, avg= 5.80, stdev= 2.11 00:45:01.524 clat (usec): min=1124, max=13424, avg=1843.94, stdev=292.06 00:45:01.524 lat (usec): min=1128, max=13429, avg=1849.74, stdev=293.05 00:45:01.524 clat percentiles (usec): 00:45:01.524 | 1.00th=[ 1336], 5.00th=[ 1450], 10.00th=[ 1516], 20.00th=[ 1598], 00:45:01.524 | 30.00th=[ 1680], 40.00th=[ 1729], 50.00th=[ 1795], 60.00th=[ 1876], 00:45:01.524 | 70.00th=[ 1958], 80.00th=[ 2073], 90.00th=[ 2245], 95.00th=[ 2376], 00:45:01.524 | 99.00th=[ 2573], 99.50th=[ 2671], 99.90th=[ 3261], 99.95th=[ 3982], 00:45:01.524 | 99.99th=[ 5211] 00:45:01.524 bw ( KiB/s): min=116224, max=135640, per=99.85%, avg=123302.33, stdev=6258.30, samples=9 00:45:01.524 iops : min=29056, max=33910, avg=30825.56, stdev=1564.55, samples=9 00:45:01.524 lat (msec) : 2=74.08%, 4=25.87%, 10=0.05%, 20=0.01% 00:45:01.524 cpu : usr=50.91%, sys=46.47%, ctx=22, majf=0, minf=762 00:45:01.524 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:45:01.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:01.524 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:45:01.524 issued rwts: total=154427,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:45:01.524 latency : target=0, window=0, percentile=100.00%, depth=64 00:45:01.524 00:45:01.524 Run status group 0 (all jobs): 00:45:01.524 READ: bw=121MiB/s (126MB/s), 121MiB/s-121MiB/s (126MB/s-126MB/s), io=603MiB (633MB), run=5002-5002msec 00:45:01.783 ----------------------------------------------------- 00:45:01.783 Suppressions used: 00:45:01.783 count bytes template 00:45:01.783 1 11 /usr/src/fio/parse.c 00:45:01.783 1 8 libtcmalloc_minimal.so 00:45:01.783 1 904 libcrypto.so 00:45:01.783 ----------------------------------------------------- 00:45:01.783 00:45:02.042 17:42:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:45:02.042 17:42:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:45:02.042 17:42:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:45:02.042 17:42:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:45:02.042 17:42:02 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:45:02.042 17:42:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:02.042 17:42:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:45:02.042 17:42:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:02.042 17:42:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:02.042 17:42:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:02.042 17:42:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:45:02.042 17:42:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:02.042 17:42:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:02.042 17:42:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:02.042 17:42:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:45:02.042 17:42:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:02.042 17:42:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:02.042 17:42:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:02.042 17:42:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:45:02.042 17:42:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:45:02.042 17:42:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 
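fio_bdev, traced above, is a thin wrapper around a fixed recipe: find the ASan runtime the SPDK fio plugin was linked against, preload it ahead of the plugin so sanitizer interposition works, then hand fio a JSON bdev config. A stripped-down sketch of that recipe, assuming the plugin and fio paths from the log; bdev.json is a hypothetical file holding the same bdev_xnvme_create config the harness streams over /dev/fd/62:

SPDK=/home/vagrant/spdk_repo/spdk
plugin="$SPDK/build/fio/spdk_bdev"

# Pull the resolved libasan path out of the plugin's own dependency list,
# mirroring the ldd | grep libasan | awk '{print $3}' step in the trace.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

# fio loads the plugin through LD_PRELOAD; the sanitizer runtime, if any,
# must come first so its interposers win.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=bdev.json --filename=xnvme_bdev \
    --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
    --time_based --runtime=5 --thread=1 --name xnvme_bdev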
00:45:02.042 { 00:45:02.042 "subsystems": [ 00:45:02.042 { 00:45:02.042 "subsystem": "bdev", 00:45:02.042 "config": [ 00:45:02.042 { 00:45:02.042 "params": { 00:45:02.042 "io_mechanism": "io_uring_cmd", 00:45:02.042 "conserve_cpu": true, 00:45:02.042 "filename": "/dev/ng0n1", 00:45:02.042 "name": "xnvme_bdev" 00:45:02.042 }, 00:45:02.042 "method": "bdev_xnvme_create" 00:45:02.042 }, 00:45:02.042 { 00:45:02.042 "method": "bdev_wait_for_examine" 00:45:02.042 } 00:45:02.042 ] 00:45:02.042 } 00:45:02.042 ] 00:45:02.042 } 00:45:02.301 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:45:02.301 fio-3.35 00:45:02.301 Starting 1 thread 00:45:08.868 00:45:08.868 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73482: Tue Nov 26 17:42:08 2024 00:45:08.868 write: IOPS=30.0k, BW=117MiB/s (123MB/s)(587MiB/5002msec); 0 zone resets 00:45:08.868 slat (usec): min=2, max=451, avg= 6.31, stdev= 4.46 00:45:08.868 clat (usec): min=58, max=7927, avg=1895.08, stdev=697.60 00:45:08.868 lat (usec): min=62, max=7930, avg=1901.39, stdev=698.20 00:45:08.868 clat percentiles (usec): 00:45:08.868 | 1.00th=[ 167], 5.00th=[ 1004], 10.00th=[ 1401], 20.00th=[ 1582], 00:45:08.868 | 30.00th=[ 1680], 40.00th=[ 1762], 50.00th=[ 1860], 60.00th=[ 1942], 00:45:08.868 | 70.00th=[ 2040], 80.00th=[ 2180], 90.00th=[ 2376], 95.00th=[ 2540], 00:45:08.868 | 99.00th=[ 5342], 99.50th=[ 5932], 99.90th=[ 6980], 99.95th=[ 7308], 00:45:08.868 | 99.99th=[ 7701] 00:45:08.868 bw ( KiB/s): min=109568, max=132152, per=99.90%, avg=120067.56, stdev=7537.34, samples=9 00:45:08.868 iops : min=27392, max=33038, avg=30016.89, stdev=1884.33, samples=9 00:45:08.869 lat (usec) : 100=0.23%, 250=1.59%, 500=1.31%, 750=0.46%, 1000=1.39% 00:45:08.869 lat (msec) : 2=61.08%, 4=31.87%, 10=2.08% 00:45:08.869 cpu : usr=50.29%, sys=45.05%, ctx=11, majf=0, minf=763 00:45:08.869 IO depths : 1=1.4%, 2=2.8%, 4=5.7%, 8=11.5%, 16=23.4%, 32=52.5%, >=64=2.7% 00:45:08.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:08.869 complete : 0=0.0%, 4=98.1%, 8=0.2%, 16=0.1%, 32=0.1%, 64=1.4%, >=64=0.0% 00:45:08.869 issued rwts: total=0,150300,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:08.869 latency : target=0, window=0, percentile=100.00%, depth=64 00:45:08.869 00:45:08.869 Run status group 0 (all jobs): 00:45:08.869 WRITE: bw=117MiB/s (123MB/s), 117MiB/s-117MiB/s (123MB/s-123MB/s), io=587MiB (616MB), run=5002-5002msec 00:45:09.436 ----------------------------------------------------- 00:45:09.436 Suppressions used: 00:45:09.436 count bytes template 00:45:09.436 1 11 /usr/src/fio/parse.c 00:45:09.436 1 8 libtcmalloc_minimal.so 00:45:09.436 1 904 libcrypto.so 00:45:09.436 ----------------------------------------------------- 00:45:09.436 00:45:09.436 00:45:09.436 real 0m14.751s 00:45:09.436 user 0m8.732s 00:45:09.436 sys 0m5.399s 00:45:09.436 17:42:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:09.436 17:42:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:45:09.436 ************************************ 00:45:09.436 END TEST xnvme_fio_plugin 00:45:09.436 ************************************ 00:45:09.436 17:42:09 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 72959 00:45:09.436 17:42:09 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72959 ']' 00:45:09.436 17:42:09 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 72959 00:45:09.436 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: 
kill: (72959) - No such process 00:45:09.436 Process with pid 72959 is not found 00:45:09.436 17:42:09 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 72959 is not found' 00:45:09.436 00:45:09.436 real 3m51.022s 00:45:09.436 user 2m4.776s 00:45:09.436 sys 1m30.971s 00:45:09.436 17:42:10 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:09.436 ************************************ 00:45:09.436 END TEST nvme_xnvme 00:45:09.436 ************************************ 00:45:09.436 17:42:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:45:09.436 17:42:10 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:45:09.436 17:42:10 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:45:09.436 17:42:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:09.436 17:42:10 -- common/autotest_common.sh@10 -- # set +x 00:45:09.436 ************************************ 00:45:09.436 START TEST blockdev_xnvme 00:45:09.436 ************************************ 00:45:09.436 17:42:10 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:45:09.695 * Looking for test storage... 00:45:09.695 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:45:09.695 17:42:10 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:45:09.695 17:42:10 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:45:09.695 17:42:10 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:45:09.695 17:42:10 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:09.695 17:42:10 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:45:09.695 17:42:10 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:09.695 17:42:10 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:45:09.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:09.695 --rc genhtml_branch_coverage=1 00:45:09.695 --rc genhtml_function_coverage=1 00:45:09.695 --rc genhtml_legend=1 00:45:09.695 --rc geninfo_all_blocks=1 00:45:09.695 --rc geninfo_unexecuted_blocks=1 00:45:09.695 00:45:09.695 ' 00:45:09.695 17:42:10 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:45:09.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:09.695 --rc genhtml_branch_coverage=1 00:45:09.695 --rc genhtml_function_coverage=1 00:45:09.695 --rc genhtml_legend=1 00:45:09.695 --rc geninfo_all_blocks=1 00:45:09.695 --rc geninfo_unexecuted_blocks=1 00:45:09.695 00:45:09.695 ' 00:45:09.695 17:42:10 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:45:09.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:09.695 --rc genhtml_branch_coverage=1 00:45:09.695 --rc genhtml_function_coverage=1 00:45:09.695 --rc genhtml_legend=1 00:45:09.695 --rc geninfo_all_blocks=1 00:45:09.695 --rc geninfo_unexecuted_blocks=1 00:45:09.695 00:45:09.695 ' 00:45:09.695 17:42:10 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:45:09.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:09.695 --rc genhtml_branch_coverage=1 00:45:09.695 --rc genhtml_function_coverage=1 00:45:09.695 --rc genhtml_legend=1 00:45:09.695 --rc geninfo_all_blocks=1 00:45:09.695 --rc geninfo_unexecuted_blocks=1 00:45:09.695 00:45:09.695 ' 00:45:09.695 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:45:09.695 17:42:10 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:45:09.695 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:45:09.695 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:45:09.695 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:45:09.695 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:45:09.695 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:45:09.695 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:45:09.695 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:45:09.695 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:45:09.695 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:45:09.695 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:45:09.695 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:45:09.695 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:45:09.695 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:45:09.695 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:45:09.695 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:45:09.695 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:45:09.695 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:45:09.695 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:45:09.696 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:45:09.696 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:45:09.696 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:45:09.696 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:45:09.696 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73622 00:45:09.696 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:45:09.696 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:45:09.696 17:42:10 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73622 00:45:09.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:09.696 17:42:10 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73622 ']' 00:45:09.696 17:42:10 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:09.696 17:42:10 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:09.696 17:42:10 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:09.696 17:42:10 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:09.696 17:42:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:45:09.953 [2024-11-26 17:42:10.450866] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:45:09.953 [2024-11-26 17:42:10.451142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73622 ] 00:45:09.953 [2024-11-26 17:42:10.638972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:10.211 [2024-11-26 17:42:10.756544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:11.173 17:42:11 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:11.173 17:42:11 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:45:11.173 17:42:11 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:45:11.173 17:42:11 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:45:11.173 17:42:11 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:45:11.173 17:42:11 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:45:11.173 17:42:11 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:45:11.782 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:45:12.349 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:45:12.349 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:45:12.349 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:45:12.609 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
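The setup_xnvme_conf trace that follows reduces to a small loop: walk every NVMe namespace node, skip zoned namespaces (only conventional ones are driven here), and emit one bdev_xnvme_create line per survivor. A compact sketch with the same effect, assuming the io_uring mechanism this pass selects; the original splits the zoned check into an is_block_zoned helper:

io_mechanism=io_uring   # the mechanism under test in this pass
nvmes=()

for nvme in /dev/nvme*n*; do
    [[ -b $nvme ]] || continue
    # Conventional namespaces report "none" in queue/zoned; zoned ones
    # (host-aware / host-managed) are excluded from the xnvme config.
    zoned=$(cat "/sys/block/${nvme##*/}/queue/zoned" 2>/dev/null || echo none)
    [[ $zoned == none ]] || continue
    # Name the bdev after the device node; -c turns on conserve_cpu,
    # matching the nvmes+=(...) lines in the trace.
    nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
done

printf '%s\n' "${nvmes[@]}"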
00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2c2n1 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:45:12.609 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:12.609 17:42:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:45:12.610 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:45:12.610 nvme0n1 00:45:12.610 nvme0n2 00:45:12.610 nvme0n3 00:45:12.610 nvme1n1 00:45:12.610 nvme2n1 00:45:12.610 nvme3n1 00:45:12.610 17:42:13 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:12.610 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:45:12.610 17:42:13 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:12.610 17:42:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:45:12.610 17:42:13 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:12.610 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:45:12.610 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:45:12.610 17:42:13 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:12.610 17:42:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:45:12.610 17:42:13 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:12.610 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:45:12.610 17:42:13 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:12.610 17:42:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:45:12.610 17:42:13 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:12.610 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:45:12.610 17:42:13 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:12.610 17:42:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:45:12.610 17:42:13 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:12.610 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:45:12.610 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:45:12.610 17:42:13 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:12.610 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:45:12.610 17:42:13 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:45:12.868 17:42:13 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:12.868 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:45:12.868 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:45:12.869 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "e1c9ec50-712e-49bf-af81-e4224adec41f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e1c9ec50-712e-49bf-af81-e4224adec41f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "e9d91f4d-b7bf-4b2d-9b56-c1e4f181d6e7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e9d91f4d-b7bf-4b2d-9b56-c1e4f181d6e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "5eb36910-6de5-42c0-92f6-47934412974f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5eb36910-6de5-42c0-92f6-47934412974f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "0f826394-832c-4203-8103-86d8878d4b83"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0f826394-832c-4203-8103-86d8878d4b83",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "461e347d-1ad5-4693-adf8-d93918892da3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "461e347d-1ad5-4693-adf8-d93918892da3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "baceaf2c-bc1a-4f55-a905-39f2a1ee5924"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "baceaf2c-bc1a-4f55-a905-39f2a1ee5924",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:45:12.869 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:45:12.869 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:45:12.869 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:45:12.869 17:42:13 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 73622 00:45:12.869 17:42:13 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73622 ']' 00:45:12.869 17:42:13 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73622 00:45:12.869 17:42:13 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:45:12.869 17:42:13 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:12.869 17:42:13 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73622 00:45:12.869 killing process with pid 73622 00:45:12.869 17:42:13 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:12.869 17:42:13 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:12.869 17:42:13 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73622' 00:45:12.869 17:42:13 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73622 00:45:12.869 
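killprocess, as expanded above for pid 73622, verifies before it signals: the pid must be non-empty, kill -0 must confirm the process exists, and on Linux the command name is resolved with ps so a sudo wrapper is never terminated directly (here it resolves to reactor_0). A simplified sketch of that pattern; the real helper in autotest_common.sh carries more cases:

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      # Signal 0 delivers nothing; it only tests that the pid exists and is signalable.
      kill -0 "$pid" || return 1
      if [[ $(uname) == Linux ]]; then
          local name
          name=$(ps --no-headers -o comm= "$pid")
          # Refuse to SIGTERM a sudo wrapper; the real target would be its child.
          [[ $name != sudo ]] || return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      # wait reaps the exit status; the pid must be a child of this shell, as it
      # is in the harness where the target app was launched by the same script.
      wait "$pid"
  }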
17:42:13 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73622 00:45:15.401 17:42:16 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:45:15.401 17:42:16 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:45:15.401 17:42:16 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:45:15.401 17:42:16 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:15.401 17:42:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:45:15.401 ************************************ 00:45:15.401 START TEST bdev_hello_world 00:45:15.401 ************************************ 00:45:15.402 17:42:16 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:45:15.660 [2024-11-26 17:42:16.148166] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:45:15.660 [2024-11-26 17:42:16.148303] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73923 ] 00:45:15.660 [2024-11-26 17:42:16.335377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:15.919 [2024-11-26 17:42:16.448562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:16.487 [2024-11-26 17:42:16.904606] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:45:16.487 [2024-11-26 17:42:16.904656] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:45:16.487 [2024-11-26 17:42:16.904675] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:45:16.487 [2024-11-26 17:42:16.906780] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:45:16.487 [2024-11-26 17:42:16.907288] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:45:16.487 [2024-11-26 17:42:16.907309] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:45:16.487 [2024-11-26 17:42:16.907576] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
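The bdev_hello_world run above points the prebuilt hello_bdev example at the bdev.json snapshot saved earlier via save_subsystem_config (it contains the six bdev_xnvme_create calls) and picks one bdev with -b. Reproduced by hand from the repo root, the invocation reduces to:

  # bdev.json is the saved accel/bdev/iobuf subsystem config; -b selects the
  # bdev that hello_bdev opens, writes "Hello World!" to, and reads back.
  ./build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1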
00:45:16.487 00:45:16.487 [2024-11-26 17:42:16.907599] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:45:17.423 00:45:17.423 real 0m1.994s 00:45:17.423 user 0m1.613s 00:45:17.423 sys 0m0.261s 00:45:17.423 17:42:18 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:17.423 17:42:18 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:45:17.423 ************************************ 00:45:17.423 END TEST bdev_hello_world 00:45:17.423 ************************************ 00:45:17.423 17:42:18 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:45:17.423 17:42:18 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:45:17.423 17:42:18 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:17.423 17:42:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:45:17.681 ************************************ 00:45:17.681 START TEST bdev_bounds 00:45:17.681 ************************************ 00:45:17.681 17:42:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:45:17.681 17:42:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73965 00:45:17.681 17:42:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:45:17.681 17:42:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:45:17.681 Process bdevio pid: 73965 00:45:17.681 17:42:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73965' 00:45:17.681 17:42:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73965 00:45:17.681 17:42:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73965 ']' 00:45:17.681 17:42:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:17.682 17:42:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:17.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:17.682 17:42:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:17.682 17:42:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:17.682 17:42:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:45:17.682 [2024-11-26 17:42:18.214737] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
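bdev_bounds starts bdevio as a background server; -w makes it hold off running tests until triggered over RPC (tests.py perform_tests below does exactly that), and waitforlisten blocks until the app answers on the default socket /var/tmp/spdk.sock. A hedged sketch of that launch-and-wait pattern, using spdk_get_version as the readiness probe (an assumption; the real waitforlisten has its own retry logic):

  ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  bdevio_pid=$!
  # Poll until the RPC socket answers, then hand control to the test driver.
  for ((i = 0; i < 100; i++)); do
      ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &>/dev/null && break
      sleep 0.1
  done
  ./test/bdev/bdevio/tests.py perform_tests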
00:45:17.682 [2024-11-26 17:42:18.214862] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73965 ] 00:45:17.941 [2024-11-26 17:42:18.395035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:45:17.941 [2024-11-26 17:42:18.517561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:17.941 [2024-11-26 17:42:18.517683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:17.941 [2024-11-26 17:42:18.517707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:45:18.510 17:42:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:18.510 17:42:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:45:18.510 17:42:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:45:18.510 I/O targets: 00:45:18.510 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:45:18.510 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:45:18.510 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:45:18.510 nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:45:18.511 nvme2n1: 262144 blocks of 4096 bytes (1024 MiB) 00:45:18.511 nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:45:18.511 00:45:18.511 00:45:18.511 CUnit - A unit testing framework for C - Version 2.1-3 00:45:18.511 http://cunit.sourceforge.net/ 00:45:18.511 00:45:18.511 00:45:18.511 Suite: bdevio tests on: nvme3n1 00:45:18.511 Test: blockdev write read block ...passed 00:45:18.511 Test: blockdev write zeroes read block ...passed 00:45:18.511 Test: blockdev write zeroes read no split ...passed 00:45:18.771 Test: blockdev write zeroes read split ...passed 00:45:18.771 Test: blockdev write zeroes read split partial ...passed 00:45:18.771 Test: blockdev reset ...passed 00:45:18.771 Test: blockdev write read 8 blocks ...passed 00:45:18.771 Test: blockdev write read size > 128k ...passed 00:45:18.771 Test: blockdev write read invalid size ...passed 00:45:18.771 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:45:18.771 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:45:18.771 Test: blockdev write read max offset ...passed 00:45:18.771 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:45:18.771 Test: blockdev writev readv 8 blocks ...passed 00:45:18.771 Test: blockdev writev readv 30 x 1block ...passed 00:45:18.771 Test: blockdev writev readv block ...passed 00:45:18.771 Test: blockdev writev readv size > 128k ...passed 00:45:18.771 Test: blockdev writev readv size > 128k in two iovs ...passed 00:45:18.771 Test: blockdev comparev and writev ...passed 00:45:18.771 Test: blockdev nvme passthru rw ...passed 00:45:18.771 Test: blockdev nvme passthru vendor specific ...passed 00:45:18.771 Test: blockdev nvme admin passthru ...passed 00:45:18.771 Test: blockdev copy ...passed 00:45:18.771 Suite: bdevio tests on: nvme2n1 00:45:18.771 Test: blockdev write read block ...passed 00:45:18.771 Test: blockdev write zeroes read block ...passed 00:45:18.771 Test: blockdev write zeroes read no split ...passed 00:45:18.771 Test: blockdev write zeroes read split ...passed 00:45:18.771 Test: blockdev write zeroes read split partial ...passed 00:45:18.771 Test: blockdev reset ...passed 
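The MiB figures in the I/O targets list are blocks times the 4096-byte block size, rounded up to the next MiB; that rounding is the only way nvme3n1's 1548666 blocks (about 6049.5 MiB) comes out as 6050 MiB. A quick shell check over the four distinct sizes:

  # ceil(blocks * 4096 / 2^20) reproduces the I/O targets list exactly:
  # 1048576 -> 4096 MiB, 1310720 -> 5120 MiB, 262144 -> 1024 MiB, 1548666 -> 6050 MiB.
  for blocks in 1048576 1310720 262144 1548666; do
      echo "$blocks blocks -> $(( (blocks * 4096 + 1048575) / 1048576 )) MiB"
  done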
00:45:18.771 Test: blockdev write read 8 blocks ...passed 00:45:18.771 Test: blockdev write read size > 128k ...passed 00:45:18.771 Test: blockdev write read invalid size ...passed 00:45:18.771 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:45:18.771 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:45:18.771 Test: blockdev write read max offset ...passed 00:45:18.771 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:45:18.771 Test: blockdev writev readv 8 blocks ...passed 00:45:18.771 Test: blockdev writev readv 30 x 1block ...passed 00:45:18.771 Test: blockdev writev readv block ...passed 00:45:18.771 Test: blockdev writev readv size > 128k ...passed 00:45:18.771 Test: blockdev writev readv size > 128k in two iovs ...passed 00:45:18.771 Test: blockdev comparev and writev ...passed 00:45:18.771 Test: blockdev nvme passthru rw ...passed 00:45:18.771 Test: blockdev nvme passthru vendor specific ...passed 00:45:18.771 Test: blockdev nvme admin passthru ...passed 00:45:18.771 Test: blockdev copy ...passed 00:45:18.771 Suite: bdevio tests on: nvme1n1 00:45:18.771 Test: blockdev write read block ...passed 00:45:18.771 Test: blockdev write zeroes read block ...passed 00:45:18.771 Test: blockdev write zeroes read no split ...passed 00:45:18.771 Test: blockdev write zeroes read split ...passed 00:45:18.771 Test: blockdev write zeroes read split partial ...passed 00:45:18.771 Test: blockdev reset ...passed 00:45:18.771 Test: blockdev write read 8 blocks ...passed 00:45:18.771 Test: blockdev write read size > 128k ...passed 00:45:18.771 Test: blockdev write read invalid size ...passed 00:45:18.771 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:45:18.771 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:45:18.771 Test: blockdev write read max offset ...passed 00:45:18.771 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:45:18.771 Test: blockdev writev readv 8 blocks ...passed 00:45:18.771 Test: blockdev writev readv 30 x 1block ...passed 00:45:18.771 Test: blockdev writev readv block ...passed 00:45:18.771 Test: blockdev writev readv size > 128k ...passed 00:45:18.771 Test: blockdev writev readv size > 128k in two iovs ...passed 00:45:18.771 Test: blockdev comparev and writev ...passed 00:45:18.771 Test: blockdev nvme passthru rw ...passed 00:45:18.771 Test: blockdev nvme passthru vendor specific ...passed 00:45:18.771 Test: blockdev nvme admin passthru ...passed 00:45:18.771 Test: blockdev copy ...passed 00:45:18.771 Suite: bdevio tests on: nvme0n3 00:45:18.771 Test: blockdev write read block ...passed 00:45:18.771 Test: blockdev write zeroes read block ...passed 00:45:18.771 Test: blockdev write zeroes read no split ...passed 00:45:18.771 Test: blockdev write zeroes read split ...passed 00:45:19.029 Test: blockdev write zeroes read split partial ...passed 00:45:19.029 Test: blockdev reset ...passed 00:45:19.029 Test: blockdev write read 8 blocks ...passed 00:45:19.029 Test: blockdev write read size > 128k ...passed 00:45:19.029 Test: blockdev write read invalid size ...passed 00:45:19.030 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:45:19.030 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:45:19.030 Test: blockdev write read max offset ...passed 00:45:19.030 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:45:19.030 Test: blockdev writev readv 8 blocks 
...passed 00:45:19.030 Test: blockdev writev readv 30 x 1block ...passed 00:45:19.030 Test: blockdev writev readv block ...passed 00:45:19.030 Test: blockdev writev readv size > 128k ...passed 00:45:19.030 Test: blockdev writev readv size > 128k in two iovs ...passed 00:45:19.030 Test: blockdev comparev and writev ...passed 00:45:19.030 Test: blockdev nvme passthru rw ...passed 00:45:19.030 Test: blockdev nvme passthru vendor specific ...passed 00:45:19.030 Test: blockdev nvme admin passthru ...passed 00:45:19.030 Test: blockdev copy ...passed 00:45:19.030 Suite: bdevio tests on: nvme0n2 00:45:19.030 Test: blockdev write read block ...passed 00:45:19.030 Test: blockdev write zeroes read block ...passed 00:45:19.030 Test: blockdev write zeroes read no split ...passed 00:45:19.030 Test: blockdev write zeroes read split ...passed 00:45:19.030 Test: blockdev write zeroes read split partial ...passed 00:45:19.030 Test: blockdev reset ...passed 00:45:19.030 Test: blockdev write read 8 blocks ...passed 00:45:19.030 Test: blockdev write read size > 128k ...passed 00:45:19.030 Test: blockdev write read invalid size ...passed 00:45:19.030 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:45:19.030 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:45:19.030 Test: blockdev write read max offset ...passed 00:45:19.030 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:45:19.030 Test: blockdev writev readv 8 blocks ...passed 00:45:19.030 Test: blockdev writev readv 30 x 1block ...passed 00:45:19.030 Test: blockdev writev readv block ...passed 00:45:19.030 Test: blockdev writev readv size > 128k ...passed 00:45:19.030 Test: blockdev writev readv size > 128k in two iovs ...passed 00:45:19.030 Test: blockdev comparev and writev ...passed 00:45:19.030 Test: blockdev nvme passthru rw ...passed 00:45:19.030 Test: blockdev nvme passthru vendor specific ...passed 00:45:19.030 Test: blockdev nvme admin passthru ...passed 00:45:19.030 Test: blockdev copy ...passed 00:45:19.030 Suite: bdevio tests on: nvme0n1 00:45:19.030 Test: blockdev write read block ...passed 00:45:19.030 Test: blockdev write zeroes read block ...passed 00:45:19.030 Test: blockdev write zeroes read no split ...passed 00:45:19.030 Test: blockdev write zeroes read split ...passed 00:45:19.030 Test: blockdev write zeroes read split partial ...passed 00:45:19.030 Test: blockdev reset ...passed 00:45:19.030 Test: blockdev write read 8 blocks ...passed 00:45:19.030 Test: blockdev write read size > 128k ...passed 00:45:19.030 Test: blockdev write read invalid size ...passed 00:45:19.030 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:45:19.030 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:45:19.030 Test: blockdev write read max offset ...passed 00:45:19.030 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:45:19.030 Test: blockdev writev readv 8 blocks ...passed 00:45:19.030 Test: blockdev writev readv 30 x 1block ...passed 00:45:19.030 Test: blockdev writev readv block ...passed 00:45:19.030 Test: blockdev writev readv size > 128k ...passed 00:45:19.030 Test: blockdev writev readv size > 128k in two iovs ...passed 00:45:19.030 Test: blockdev comparev and writev ...passed 00:45:19.030 Test: blockdev nvme passthru rw ...passed 00:45:19.030 Test: blockdev nvme passthru vendor specific ...passed 00:45:19.030 Test: blockdev nvme admin passthru ...passed 00:45:19.030 Test: blockdev copy ...passed 
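All six suites run the same 23-test bdevio set, one suite per bdev, which is what the summary below totals: 6 suites, 6 x 23 = 138 tests, 0 failed. Saving this console output to a file (bdevio.log is a hypothetical name) makes the count easy to verify:

  # Every per-suite test line starts with "Test: blockdev"; 138 across 6 suites.
  grep -c 'Test: blockdev' bdevio.log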
00:45:19.030 00:45:19.030 Run Summary: Type Total Ran Passed Failed Inactive 00:45:19.030 suites 6 6 n/a 0 0 00:45:19.030 tests 138 138 138 0 0 00:45:19.030 asserts 780 780 780 0 n/a 00:45:19.030 00:45:19.030 Elapsed time = 1.503 seconds 00:45:19.030 0 00:45:19.030 17:42:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73965 00:45:19.030 17:42:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73965 ']' 00:45:19.030 17:42:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73965 00:45:19.030 17:42:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:45:19.030 17:42:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:19.030 17:42:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73965 00:45:19.288 17:42:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:19.288 killing process with pid 73965 00:45:19.288 17:42:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:19.288 17:42:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73965' 00:45:19.288 17:42:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73965 00:45:19.288 17:42:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73965 00:45:20.226 17:42:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:45:20.226 00:45:20.226 real 0m2.790s 00:45:20.226 user 0m6.891s 00:45:20.226 sys 0m0.440s 00:45:20.226 ************************************ 00:45:20.226 END TEST bdev_bounds 00:45:20.226 ************************************ 00:45:20.226 17:42:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:20.226 17:42:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:45:20.486 17:42:20 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:45:20.486 17:42:20 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:45:20.486 17:42:20 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:20.486 17:42:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:45:20.486 ************************************ 00:45:20.486 START TEST bdev_nbd 00:45:20.486 ************************************ 00:45:20.486 17:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:45:20.486 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:45:20.486 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:45:20.486 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:45:20.486 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:45:20.486 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:45:20.486 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:45:20.486 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
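bdev_nbd gates itself on the nbd kernel module: the harness only tests for /sys/module/nbd, then carves its six-device nbd_list out of the sixteen-entry nbd_all pool (listed in lexicographic order, matching the trace above, so nbd10..nbd15 precede nbd2). A sketch of that setup; the modprobe fallback is an addition here, the test itself simply requires the module to be present:

  [[ -e /sys/module/nbd ]] || modprobe nbd
  nbd_all=(/dev/nbd{0,1,10,11,12,13,14,15,2,3,4,5,6,7,8,9})
  nbd_list=("${nbd_all[@]::6}")
  echo "using: ${nbd_list[*]}"   # /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13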
00:45:20.486 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:45:20.486 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:45:20.486 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:45:20.486 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:45:20.486 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:45:20.486 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:45:20.486 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:45:20.486 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:45:20.486 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74028 00:45:20.486 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:45:20.486 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:45:20.486 17:42:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74028 /var/tmp/spdk-nbd.sock 00:45:20.486 17:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74028 ']' 00:45:20.486 17:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:45:20.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:45:20.486 17:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:45:20.486 17:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:45:20.486 17:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:45:20.486 17:42:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:45:20.486 [2024-11-26 17:42:21.098737] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
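With bdev_svc listening on the private socket /var/tmp/spdk-nbd.sock, each bdev is exported as an nbd device and probed exactly as the trace below shows: wait for the name to appear in /proc/partitions, then pull one 4 KiB block back through the kernel with O_DIRECT. A condensed sketch of one attach-and-verify round (the of= path and the 0.1 s poll interval are simplifications; the helpers retry up to 20 times):

  sock=/var/tmp/spdk-nbd.sock
  ./scripts/rpc.py -s "$sock" nbd_start_disk nvme0n1 /dev/nbd0
  # Usable once the kernel lists the device in /proc/partitions.
  for ((i = 1; i <= 20; i++)); do
      grep -q -w nbd0 /proc/partitions && break
      sleep 0.1
  done
  # A single O_DIRECT read of one 4096-byte block proves I/O flows end to end.
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct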
00:45:20.486 [2024-11-26 17:42:21.098879] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:20.745 [2024-11-26 17:42:21.288655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:20.745 [2024-11-26 17:42:21.406253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:21.314 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:45:21.314 17:42:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:45:21.314 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:45:21.314 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:45:21.314 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:45:21.314 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:45:21.314 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:45:21.314 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:45:21.314 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:45:21.314 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:45:21.314 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:45:21.314 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:45:21.314 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:45:21.314 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:45:21.314 17:42:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:45:21.573 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:45:21.573 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:45:21.573 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:45:21.573 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:45:21.573 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:45:21.573 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:45:21.573 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:45:21.573 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:45:21.573 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:45:21.573 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:45:21.573 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:45:21.573 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:45:21.573 
1+0 records in 00:45:21.573 1+0 records out 00:45:21.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000680934 s, 6.0 MB/s 00:45:21.573 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:21.573 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:45:21.573 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:21.573 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:45:21.574 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:45:21.574 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:45:21.574 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:45:21.574 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:45:21.833 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:45:21.833 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:45:21.833 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:45:21.833 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:45:21.833 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:45:21.833 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:45:21.833 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:45:21.833 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:45:21.833 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:45:21.833 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:45:21.833 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:45:21.833 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:45:21.833 1+0 records in 00:45:21.833 1+0 records out 00:45:21.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000572056 s, 7.2 MB/s 00:45:21.833 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:21.833 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:45:21.833 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:21.833 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:45:21.833 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:45:21.833 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:45:21.833 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:45:21.833 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:45:22.092 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:45:22.092 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:45:22.092 17:42:22 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:45:22.092 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:45:22.092 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:45:22.092 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:45:22.092 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:45:22.092 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:45:22.093 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:45:22.093 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:45:22.093 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:45:22.093 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:45:22.093 1+0 records in 00:45:22.093 1+0 records out 00:45:22.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000759739 s, 5.4 MB/s 00:45:22.093 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:22.093 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:45:22.093 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:22.093 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:45:22.093 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:45:22.093 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:45:22.093 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:45:22.093 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:45:22.352 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:45:22.352 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:45:22.352 17:42:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:45:22.352 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:45:22.352 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:45:22.352 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:45:22.352 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:45:22.352 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:45:22.352 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:45:22.352 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:45:22.352 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:45:22.352 17:42:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:45:22.352 1+0 records in 00:45:22.352 1+0 records out 00:45:22.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000602777 s, 6.8 MB/s 00:45:22.352 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:22.352 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:45:22.352 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:22.352 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:45:22.352 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:45:22.352 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:45:22.352 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:45:22.352 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:45:22.612 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:45:22.612 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:45:22.612 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:45:22.612 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:45:22.612 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:45:22.612 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:45:22.612 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:45:22.612 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:45:22.612 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:45:22.612 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:45:22.612 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:45:22.612 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:45:22.612 1+0 records in 00:45:22.612 1+0 records out 00:45:22.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562956 s, 7.3 MB/s 00:45:22.612 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:22.612 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:45:22.612 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:22.612 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:45:22.612 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:45:22.612 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:45:22.612 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:45:22.612 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:45:22.871 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:45:22.871 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:45:22.871 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:45:22.871 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:45:22.871 17:42:23 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:45:22.871 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:45:22.871 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:45:22.872 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:45:22.872 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:45:22.872 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:45:22.872 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:45:22.872 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:45:22.872 1+0 records in 00:45:22.872 1+0 records out 00:45:22.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000837943 s, 4.9 MB/s 00:45:22.872 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:22.872 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:45:22.872 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:22.872 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:45:22.872 17:42:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:45:22.872 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:45:22.872 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:45:22.872 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:45:23.130 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:45:23.130 { 00:45:23.130 "nbd_device": "/dev/nbd0", 00:45:23.130 "bdev_name": "nvme0n1" 00:45:23.130 }, 00:45:23.130 { 00:45:23.130 "nbd_device": "/dev/nbd1", 00:45:23.130 "bdev_name": "nvme0n2" 00:45:23.130 }, 00:45:23.130 { 00:45:23.130 "nbd_device": "/dev/nbd2", 00:45:23.130 "bdev_name": "nvme0n3" 00:45:23.130 }, 00:45:23.130 { 00:45:23.130 "nbd_device": "/dev/nbd3", 00:45:23.130 "bdev_name": "nvme1n1" 00:45:23.130 }, 00:45:23.130 { 00:45:23.130 "nbd_device": "/dev/nbd4", 00:45:23.130 "bdev_name": "nvme2n1" 00:45:23.130 }, 00:45:23.130 { 00:45:23.130 "nbd_device": "/dev/nbd5", 00:45:23.130 "bdev_name": "nvme3n1" 00:45:23.130 } 00:45:23.130 ]' 00:45:23.130 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:45:23.130 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:45:23.130 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:45:23.130 { 00:45:23.130 "nbd_device": "/dev/nbd0", 00:45:23.130 "bdev_name": "nvme0n1" 00:45:23.130 }, 00:45:23.130 { 00:45:23.130 "nbd_device": "/dev/nbd1", 00:45:23.130 "bdev_name": "nvme0n2" 00:45:23.130 }, 00:45:23.130 { 00:45:23.130 "nbd_device": "/dev/nbd2", 00:45:23.130 "bdev_name": "nvme0n3" 00:45:23.130 }, 00:45:23.130 { 00:45:23.130 "nbd_device": "/dev/nbd3", 00:45:23.131 "bdev_name": "nvme1n1" 00:45:23.131 }, 00:45:23.131 { 00:45:23.131 "nbd_device": "/dev/nbd4", 00:45:23.131 "bdev_name": "nvme2n1" 00:45:23.131 }, 00:45:23.131 { 00:45:23.131 "nbd_device": 
"/dev/nbd5", 00:45:23.131 "bdev_name": "nvme3n1" 00:45:23.131 } 00:45:23.131 ]' 00:45:23.131 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:45:23.131 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:45:23.131 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:45:23.131 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:45:23.131 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:45:23.131 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:45:23.131 17:42:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:45:23.390 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:45:23.390 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:45:23.390 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:45:23.390 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:45:23.390 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:45:23.390 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:45:23.390 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:45:23.390 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:45:23.390 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:45:23.390 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:45:23.650 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:45:23.650 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:45:23.650 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:45:23.650 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:45:23.650 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:45:23.650 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:45:23.650 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:45:23.650 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:45:23.650 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:45:23.650 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:45:23.909 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:45:23.909 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:45:23.909 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:45:23.909 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:45:23.909 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:45:23.909 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:45:23.909 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:45:23.909 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:45:23.909 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:45:23.909 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:45:24.168 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:45:24.168 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:45:24.168 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:45:24.168 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:45:24.168 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:45:24.168 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:45:24.168 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:45:24.168 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:45:24.168 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:45:24.168 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:45:24.441 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:45:24.441 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:45:24.441 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:45:24.441 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:45:24.441 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:45:24.441 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:45:24.441 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:45:24.441 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:45:24.441 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:45:24.441 17:42:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:45:24.714 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:45:24.714 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:45:24.714 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:45:24.714 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:45:24.714 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:45:24.714 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:45:24.714 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:45:24.714 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:45:24.714 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:45:24.714 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:45:24.714 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:45:24.714 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:45:24.714 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:45:24.714 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:45:24.973 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:45:24.973 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:45:24.973 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:45:24.973 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:45:24.973 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:45:24.973 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:45:24.973 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:45:24.973 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:45:24.973 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:45:24.973 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:45:24.973 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:45:24.973 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:45:24.973 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:45:24.973 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:45:24.973 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:45:24.973 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:45:24.973 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:45:24.973 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:45:24.973 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:45:24.973 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:45:24.973 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:45:24.973 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:45:24.974 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:45:24.974 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:45:24.974 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:45:24.974 /dev/nbd0 00:45:25.234 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:45:25.234 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:45:25.234 17:42:25 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:45:25.234 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:45:25.234 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:45:25.234 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:45:25.234 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:45:25.234 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:45:25.234 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:45:25.234 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:45:25.234 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:45:25.234 1+0 records in 00:45:25.234 1+0 records out 00:45:25.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000670105 s, 6.1 MB/s 00:45:25.234 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:25.234 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:45:25.234 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:25.234 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:45:25.234 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:45:25.234 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:45:25.234 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:45:25.234 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:45:25.234 /dev/nbd1 00:45:25.494 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:45:25.494 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:45:25.494 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:45:25.494 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:45:25.494 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:45:25.494 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:45:25.494 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:45:25.494 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:45:25.494 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:45:25.494 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:45:25.494 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:45:25.494 1+0 records in 00:45:25.494 1+0 records out 00:45:25.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000682443 s, 6.0 MB/s 00:45:25.494 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:25.494 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:45:25.494 17:42:25 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:25.494 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:45:25.494 17:42:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:45:25.494 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:45:25.494 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:45:25.494 17:42:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:45:25.494 /dev/nbd10 00:45:25.753 17:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:45:25.753 17:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:45:25.753 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:45:25.753 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:45:25.754 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:45:25.754 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:45:25.754 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:45:25.754 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:45:25.754 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:45:25.754 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:45:25.754 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:45:25.754 1+0 records in 00:45:25.754 1+0 records out 00:45:25.754 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529436 s, 7.7 MB/s 00:45:25.754 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:25.754 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:45:25.754 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:25.754 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:45:25.754 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:45:25.754 17:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:45:25.754 17:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:45:25.754 17:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:45:25.754 /dev/nbd11 00:45:26.013 17:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:45:26.013 17:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:45:26.013 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:45:26.013 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:45:26.013 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:45:26.013 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:45:26.013 17:42:26 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:45:26.013 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:45:26.013 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:45:26.013 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:45:26.013 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:45:26.013 1+0 records in 00:45:26.013 1+0 records out 00:45:26.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000760576 s, 5.4 MB/s 00:45:26.013 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:26.013 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:45:26.013 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:26.013 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:45:26.013 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:45:26.013 17:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:45:26.013 17:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:45:26.013 17:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:45:26.013 /dev/nbd12 00:45:26.273 17:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:45:26.273 17:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:45:26.273 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:45:26.273 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:45:26.273 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:45:26.273 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:45:26.273 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:45:26.273 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:45:26.273 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:45:26.273 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:45:26.273 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:45:26.273 1+0 records in 00:45:26.273 1+0 records out 00:45:26.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000799023 s, 5.1 MB/s 00:45:26.273 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:26.273 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:45:26.273 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:26.273 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:45:26.273 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:45:26.273 17:42:26 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:45:26.273 17:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:45:26.273 17:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:45:26.273 /dev/nbd13 00:45:26.532 17:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:45:26.532 17:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:45:26.532 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:45:26.532 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:45:26.532 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:45:26.532 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:45:26.532 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:45:26.532 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:45:26.532 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:45:26.532 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:45:26.532 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:45:26.532 1+0 records in 00:45:26.532 1+0 records out 00:45:26.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000885679 s, 4.6 MB/s 00:45:26.532 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:26.532 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:45:26.532 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:26.532 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:45:26.532 17:42:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:45:26.532 17:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:45:26.532 17:42:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:45:26.532 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:45:26.532 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:45:26.532 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:45:26.532 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:45:26.532 { 00:45:26.532 "nbd_device": "/dev/nbd0", 00:45:26.532 "bdev_name": "nvme0n1" 00:45:26.532 }, 00:45:26.532 { 00:45:26.532 "nbd_device": "/dev/nbd1", 00:45:26.532 "bdev_name": "nvme0n2" 00:45:26.532 }, 00:45:26.532 { 00:45:26.532 "nbd_device": "/dev/nbd10", 00:45:26.532 "bdev_name": "nvme0n3" 00:45:26.532 }, 00:45:26.532 { 00:45:26.532 "nbd_device": "/dev/nbd11", 00:45:26.532 "bdev_name": "nvme1n1" 00:45:26.532 }, 00:45:26.532 { 00:45:26.532 "nbd_device": "/dev/nbd12", 00:45:26.532 "bdev_name": "nvme2n1" 00:45:26.532 }, 00:45:26.532 { 00:45:26.532 "nbd_device": "/dev/nbd13", 00:45:26.532 "bdev_name": "nvme3n1" 00:45:26.532 } 00:45:26.532 ]' 00:45:26.532 17:42:27 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:45:26.532 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:45:26.532 { 00:45:26.532 "nbd_device": "/dev/nbd0", 00:45:26.532 "bdev_name": "nvme0n1" 00:45:26.532 }, 00:45:26.532 { 00:45:26.532 "nbd_device": "/dev/nbd1", 00:45:26.532 "bdev_name": "nvme0n2" 00:45:26.532 }, 00:45:26.532 { 00:45:26.532 "nbd_device": "/dev/nbd10", 00:45:26.532 "bdev_name": "nvme0n3" 00:45:26.532 }, 00:45:26.532 { 00:45:26.532 "nbd_device": "/dev/nbd11", 00:45:26.532 "bdev_name": "nvme1n1" 00:45:26.532 }, 00:45:26.532 { 00:45:26.532 "nbd_device": "/dev/nbd12", 00:45:26.532 "bdev_name": "nvme2n1" 00:45:26.532 }, 00:45:26.532 { 00:45:26.533 "nbd_device": "/dev/nbd13", 00:45:26.533 "bdev_name": "nvme3n1" 00:45:26.533 } 00:45:26.533 ]' 00:45:26.792 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:45:26.792 /dev/nbd1 00:45:26.792 /dev/nbd10 00:45:26.792 /dev/nbd11 00:45:26.792 /dev/nbd12 00:45:26.792 /dev/nbd13' 00:45:26.792 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:45:26.792 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:45:26.792 /dev/nbd1 00:45:26.792 /dev/nbd10 00:45:26.792 /dev/nbd11 00:45:26.792 /dev/nbd12 00:45:26.792 /dev/nbd13' 00:45:26.792 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:45:26.792 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:45:26.792 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:45:26.792 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:45:26.792 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:45:26.792 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:45:26.792 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:45:26.792 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:45:26.792 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:45:26.792 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:45:26.792 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:45:26.792 256+0 records in 00:45:26.792 256+0 records out 00:45:26.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013417 s, 78.2 MB/s 00:45:26.792 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:45:26.792 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:45:26.792 256+0 records in 00:45:26.792 256+0 records out 00:45:26.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126235 s, 8.3 MB/s 00:45:26.792 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:45:26.792 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:45:27.052 256+0 records in 00:45:27.052 256+0 records out 00:45:27.052 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.130817 s, 8.0 MB/s 00:45:27.052 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:45:27.052 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:45:27.052 256+0 records in 00:45:27.052 256+0 records out 00:45:27.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127029 s, 8.3 MB/s 00:45:27.052 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:45:27.052 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:45:27.311 256+0 records in 00:45:27.311 256+0 records out 00:45:27.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131367 s, 8.0 MB/s 00:45:27.311 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:45:27.311 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:45:27.311 256+0 records in 00:45:27.311 256+0 records out 00:45:27.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124419 s, 8.4 MB/s 00:45:27.311 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:45:27.311 17:42:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:45:27.571 256+0 records in 00:45:27.571 256+0 records out 00:45:27.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156239 s, 6.7 MB/s 00:45:27.571 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:45:27.571 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:45:27.571 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:45:27.571 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:45:27.571 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:45:27.571 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:45:27.571 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:45:27.571 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:45:27.571 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:45:27.571 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:45:27.571 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:45:27.571 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:45:27.571 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:45:27.571 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:45:27.572 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:45:27.572 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:45:27.572 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:45:27.572 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:45:27.572 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:45:27.572 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:45:27.572 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:45:27.572 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:45:27.572 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:45:27.572 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:45:27.572 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:45:27.572 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:45:27.572 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:45:27.831 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:45:27.831 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:45:27.831 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:45:27.831 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:45:27.831 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:45:27.831 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:45:27.831 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:45:27.831 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:45:27.831 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:45:27.831 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:45:28.091 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:45:28.091 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:45:28.091 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:45:28.091 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:45:28.091 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:45:28.091 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:45:28.091 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:45:28.091 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:45:28.091 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:45:28.091 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:45:28.351 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:45:28.351 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:45:28.351 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:45:28.351 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:45:28.351 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:45:28.351 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:45:28.351 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:45:28.351 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:45:28.351 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:45:28.351 17:42:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:45:28.610 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:45:28.610 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:45:28.610 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:45:28.610 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:45:28.610 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:45:28.610 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:45:28.610 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:45:28.610 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:45:28.610 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:45:28.610 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:45:28.610 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:45:28.610 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:45:28.610 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:45:28.610 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:45:28.610 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:45:28.610 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:45:28.870 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:45:28.870 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:45:28.870 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:45:28.870 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:45:28.870 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:45:28.870 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:45:28.870 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:45:28.870 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:45:28.870 17:42:29 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:45:28.870 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:45:28.870 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:45:28.870 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:45:28.870 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:45:28.870 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:45:28.870 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:45:29.129 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:45:29.129 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:45:29.129 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:45:29.129 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:45:29.129 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:45:29.129 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:45:29.129 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:45:29.129 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:45:29.129 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:45:29.129 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:45:29.129 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:45:29.129 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:45:29.129 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:45:29.129 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:45:29.129 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:45:29.129 17:42:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:45:29.388 malloc_lvol_verify 00:45:29.388 17:42:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:45:29.646 828b220a-f2ce-4f59-b65e-ee9cfbdd472c 00:45:29.646 17:42:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:45:29.905 7386827d-6055-49aa-af18-906a2af0dca1 00:45:29.905 17:42:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:45:30.164 /dev/nbd0 00:45:30.164 17:42:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:45:30.164 17:42:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:45:30.164 17:42:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:45:30.164 17:42:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:45:30.164 17:42:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
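The nbd_with_lvol_verify sequence traced above reduces to a short chain of RPC calls against the app's /var/tmp/spdk-nbd.sock socket, capped by the mkfs.ext4 run whose output follows below. A minimal standalone sketch, with paths and arguments taken from the trace (the rpc shorthand variable and the omitted error handling are ours, not the test's):

    # Sketch of the lvol-over-NBD check, assuming a running SPDK app listening
    # on /var/tmp/spdk-nbd.sock and the nbd kernel module already loaded.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512  # 16 MiB malloc bdev, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs  # prints the new lvstore UUID
    $rpc bdev_lvol_create lvol 4 -l lvs                   # 4 MiB logical volume "lvs/lvol"
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                # expose the lvol as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                   # the check passes if mkfs succeeds
    $rpc nbd_stop_disk /dev/nbd0                          # teardown, as traced below

Related, since it dominates both the stop loops earlier in the run and the teardown that follows: devices are waited out with a small polling helper rather than a fixed sleep. Reconstructed from the nbd_common.sh trace (the @35-@45 lines), it looks roughly like this; the back-off interval is an assumption, since the trace only shows the loop bounds and the grep:

    # waitfornbd_exit, as reconstructed from the xtrace: poll /proc/partitions
    # up to 20 times and stop as soon as the nbd device no longer appears there.
    waitfornbd_exit() {
        local nbd_name=$1
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1  # assumed interval; not visible in the trace
            else
                break
            fi
        done
        return 0
    }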
00:45:30.164 mke2fs 1.47.0 (5-Feb-2023)
00:45:30.164 Discarding device blocks: 0/4096 done
00:45:30.164 Creating filesystem with 4096 1k blocks and 1024 inodes
00:45:30.164
00:45:30.164 Allocating group tables: 0/1 done
00:45:30.164 Writing inode tables: 0/1 done
00:45:30.164 Creating journal (1024 blocks): done
00:45:30.164 Writing superblocks and filesystem accounting information: 0/1 done
00:45:30.164
00:45:30.164 17:42:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:45:30.164 17:42:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:45:30.164 17:42:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:45:30.164 17:42:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:45:30.164 17:42:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:45:30.164 17:42:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:45:30.164 17:42:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:45:30.422 17:42:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:45:30.423 17:42:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:45:30.423 17:42:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:45:30.423 17:42:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:45:30.423 17:42:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:45:30.423 17:42:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:45:30.423 17:42:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:45:30.423 17:42:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:45:30.423 17:42:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74028
00:45:30.423 17:42:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74028 ']'
00:45:30.423 17:42:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74028
00:45:30.423 17:42:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:45:30.423 17:42:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:45:30.423 17:42:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74028
00:45:30.423 17:42:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:45:30.423 17:42:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:45:30.423 killing process with pid 74028
00:45:30.423 17:42:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74028'
00:45:30.423 17:42:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74028
00:45:30.423 17:42:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74028
00:45:31.800 17:42:32 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:45:31.800
00:45:31.800 real 0m11.164s
00:45:31.800 user 0m14.166s
00:45:31.800 sys 0m4.944s
00:45:31.800 17:42:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:45:31.800 17:42:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:45:31.800 ************************************
00:45:31.800 END TEST bdev_nbd 00:45:31.800 ************************************ 00:45:31.800 17:42:32 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:45:31.800 17:42:32 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:45:31.800 17:42:32 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:45:31.800 17:42:32 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:45:31.800 17:42:32 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:45:31.800 17:42:32 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:31.800 17:42:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:45:31.800 ************************************ 00:45:31.800 START TEST bdev_fio 00:45:31.800 ************************************ 00:45:31.800 17:42:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:45:31.800 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:45:31.800 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:45:31.800 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:45:31.800 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:45:31.800 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:45:31.800 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:45:31.800 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:45:31.800 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:45:31.800 17:42:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:45:31.800 17:42:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:45:31.800 17:42:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:45:31.800 17:42:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:45:31.800 17:42:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:45:31.800 17:42:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:45:31.800 17:42:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:45:31.800 17:42:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:45:31.800 17:42:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:45:31.800 17:42:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:45:31.800 17:42:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:45:31.800 17:42:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # 
echo serialize_overlap=1 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:45:31.801 ************************************ 00:45:31.801 START TEST bdev_fio_rw_verify 00:45:31.801 ************************************ 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:45:31.801 17:42:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:45:32.060 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:45:32.060 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:45:32.060 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:45:32.060 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:45:32.060 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:45:32.060 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:45:32.060 fio-3.35 00:45:32.060 Starting 6 threads 00:45:44.261 00:45:44.261 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74436: Tue Nov 26 17:42:43 2024 00:45:44.261 read: IOPS=33.4k, BW=130MiB/s (137MB/s)(1303MiB/10001msec) 00:45:44.261 slat (usec): min=2, max=1681, avg= 6.71, stdev= 6.66 00:45:44.261 clat (usec): min=110, max=3962, avg=555.73, 
stdev=224.30
00:45:44.261 lat (usec): min=112, max=3968, avg=562.44, stdev=225.27
00:45:44.261 clat percentiles (usec):
00:45:44.261 | 50.000th=[ 562], 99.000th=[ 1172], 99.900th=[ 2008], 99.990th=[ 3621],
00:45:44.261 | 99.999th=[ 3851]
00:45:44.261 write: IOPS=33.6k, BW=131MiB/s (138MB/s)(1313MiB/10001msec); 0 zone resets
00:45:44.261 slat (usec): min=10, max=7178, avg=23.87, stdev=33.93
00:45:44.261 clat (usec): min=79, max=7978, avg=645.01, stdev=237.24
00:45:44.261 lat (usec): min=96, max=8018, avg=668.88, stdev=242.38
00:45:44.261 clat percentiles (usec):
00:45:44.261 | 50.000th=[ 635], 99.000th=[ 1385], 99.900th=[ 1975], 99.990th=[ 2704],
00:45:44.261 | 99.999th=[ 3884]
00:45:44.261 bw ( KiB/s): min=112757, max=163336, per=100.00%, avg=134739.84, stdev=2382.66, samples=114
00:45:44.261 iops : min=28188, max=40834, avg=33684.63, stdev=595.70, samples=114
00:45:44.261 lat (usec) : 100=0.01%, 250=5.54%, 500=27.22%, 750=46.56%, 1000=16.15%
00:45:44.261 lat (msec) : 2=4.44%, 4=0.09%, 10=0.01%
00:45:44.261 cpu : usr=56.54%, sys=29.01%, ctx=8260, majf=0, minf=27637
00:45:44.261 IO depths : 1=11.9%, 2=24.3%, 4=50.7%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0%
00:45:44.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:44.261 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:44.261 issued rwts: total=333631,336139,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:44.261 latency : target=0, window=0, percentile=100.00%, depth=8
00:45:44.261
00:45:44.261 Run status group 0 (all jobs):
00:45:44.261 READ: bw=130MiB/s (137MB/s), 130MiB/s-130MiB/s (137MB/s-137MB/s), io=1303MiB (1367MB), run=10001-10001msec
00:45:44.261 WRITE: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=1313MiB (1377MB), run=10001-10001msec
00:45:44.521 -----------------------------------------------------
00:45:44.521 Suppressions used:
00:45:44.521 count bytes template
00:45:44.521 6 48 /usr/src/fio/parse.c
00:45:44.521 2241 215136 /usr/src/fio/iolog.c
00:45:44.521 1 8 libtcmalloc_minimal.so
00:45:44.521 1 904 libcrypto.so
00:45:44.521 -----------------------------------------------------
00:45:44.521
00:45:44.521
00:45:44.521 real 0m12.700s
00:45:44.521 user 0m36.074s
00:45:44.521 sys 0m17.852s
00:45:44.521 17:42:45 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:45:44.521 17:42:45 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:45:44.521 ************************************
00:45:44.521 END TEST bdev_fio_rw_verify ************************************
00:45:44.521 17:42:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:45:44.521 17:42:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:45:44.521 17:42:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:45:44.521 17:42:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:45:44.521 17:42:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim
00:45:44.521 17:42:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=
00:45:44.521 17:42:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:45:44.521 17:42:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local
fio_dir=/usr/src/fio 00:45:44.521 17:42:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:45:44.521 17:42:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:45:44.521 17:42:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:45:44.521 17:42:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:45:44.521 17:42:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:45:44.521 17:42:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:45:44.521 17:42:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:45:44.521 17:42:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:45:44.521 17:42:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:45:44.522 17:42:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "e1c9ec50-712e-49bf-af81-e4224adec41f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e1c9ec50-712e-49bf-af81-e4224adec41f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "e9d91f4d-b7bf-4b2d-9b56-c1e4f181d6e7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e9d91f4d-b7bf-4b2d-9b56-c1e4f181d6e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "5eb36910-6de5-42c0-92f6-47934412974f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5eb36910-6de5-42c0-92f6-47934412974f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "0f826394-832c-4203-8103-86d8878d4b83"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0f826394-832c-4203-8103-86d8878d4b83",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "461e347d-1ad5-4693-adf8-d93918892da3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "461e347d-1ad5-4693-adf8-d93918892da3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "baceaf2c-bc1a-4f55-a905-39f2a1ee5924"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "baceaf2c-bc1a-4f55-a905-39f2a1ee5924",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:45:44.522 17:42:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:45:44.522 17:42:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:45:44.522 /home/vagrant/spdk_repo/spdk 00:45:44.522 17:42:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:45:44.522 17:42:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:45:44.522 17:42:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:45:44.522 00:45:44.522 real 0m12.938s 00:45:44.522 user 0m36.191s 00:45:44.522 sys 0m17.977s 00:45:44.522 17:42:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:44.522 17:42:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:45:44.522 ************************************ 00:45:44.522 END TEST bdev_fio 00:45:44.522 ************************************ 00:45:44.782 17:42:45 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:45:44.782 17:42:45 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:45:44.782 17:42:45 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:45:44.782 17:42:45 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:44.782 17:42:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:45:44.782 ************************************ 00:45:44.782 START TEST bdev_verify 00:45:44.782 ************************************ 00:45:44.782 17:42:45 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:45:44.782 [2024-11-26 17:42:45.327789] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:45:44.782 [2024-11-26 17:42:45.327905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74623 ] 00:45:45.041 [2024-11-26 17:42:45.510637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:45:45.041 [2024-11-26 17:42:45.625238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:45.041 [2024-11-26 17:42:45.625269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:45.608 Running I/O for 5 seconds... 
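While the five-second verify run above executes, the invocation is worth decoding: -q 128 is the per-job queue depth, -o 4096 selects 4 KiB I/Os, -w verify is the write-then-read-back data-verification workload, -t 5 bounds the run to five seconds, -m 0x3 enables cores 0 and 1, and -C, judging by the paired Core Mask 0x1/0x2 job lines per bdev in the table that follows, lets each enabled core drive every bdev. A hedged rerun sketch with the same parameters (paths as used in this workspace):

  # Re-run the verify workload against the same bdev config (sketch)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3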
00:45:47.553 25536.00 IOPS, 99.75 MiB/s [2024-11-26T17:42:49.625Z] 23776.00 IOPS, 92.88 MiB/s [2024-11-26T17:42:50.562Z] 23584.00 IOPS, 92.12 MiB/s [2024-11-26T17:42:51.500Z] 23432.00 IOPS, 91.53 MiB/s [2024-11-26T17:42:51.500Z] 23212.80 IOPS, 90.67 MiB/s 00:45:50.806 Latency(us) 00:45:50.806 [2024-11-26T17:42:51.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:50.806 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:45:50.806 Verification LBA range: start 0x0 length 0x80000 00:45:50.806 nvme0n1 : 5.05 1696.69 6.63 0.00 0.00 75319.74 9159.25 71589.53 00:45:50.806 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:45:50.806 Verification LBA range: start 0x80000 length 0x80000 00:45:50.806 nvme0n1 : 5.03 1883.38 7.36 0.00 0.00 67855.42 6264.08 66115.03 00:45:50.806 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:45:50.806 Verification LBA range: start 0x0 length 0x80000 00:45:50.806 nvme0n2 : 5.02 1681.62 6.57 0.00 0.00 75867.53 17370.99 63588.34 00:45:50.806 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:45:50.806 Verification LBA range: start 0x80000 length 0x80000 00:45:50.806 nvme0n2 : 5.03 1857.39 7.26 0.00 0.00 68685.38 7001.03 66536.15 00:45:50.806 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:45:50.806 Verification LBA range: start 0x0 length 0x80000 00:45:50.806 nvme0n3 : 5.07 1692.93 6.61 0.00 0.00 75239.17 11685.94 62325.00 00:45:50.806 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:45:50.806 Verification LBA range: start 0x80000 length 0x80000 00:45:50.806 nvme0n3 : 5.03 1856.79 7.25 0.00 0.00 68596.71 13001.92 68641.72 00:45:50.806 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:45:50.806 Verification LBA range: start 0x0 length 0xa0000 00:45:50.806 nvme1n1 : 5.04 1676.37 6.55 0.00 0.00 75869.42 15160.13 65693.92 00:45:50.806 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:45:50.806 Verification LBA range: start 0xa0000 length 0xa0000 00:45:50.806 nvme1n1 : 5.05 1849.80 7.23 0.00 0.00 68751.03 11212.18 69062.84 00:45:50.806 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:45:50.806 Verification LBA range: start 0x0 length 0x20000 00:45:50.806 nvme2n1 : 5.07 1690.47 6.60 0.00 0.00 75121.93 8948.69 71168.41 00:45:50.806 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:45:50.806 Verification LBA range: start 0x20000 length 0x20000 00:45:50.806 nvme2n1 : 5.05 1849.29 7.22 0.00 0.00 68657.87 8843.41 72852.87 00:45:50.806 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:45:50.806 Verification LBA range: start 0x0 length 0xbd0bd 00:45:50.806 nvme3n1 : 5.07 2536.71 9.91 0.00 0.00 49907.34 5079.70 71168.41 00:45:50.806 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:45:50.806 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:45:50.806 nvme3n1 : 5.08 2779.90 10.86 0.00 0.00 45556.29 4474.35 56008.28 00:45:50.806 [2024-11-26T17:42:51.500Z] =================================================================================================================== 00:45:50.806 [2024-11-26T17:42:51.500Z] Total : 23051.33 90.04 0.00 0.00 66223.98 4474.35 72852.87 00:45:51.744 00:45:51.744 real 0m7.145s 00:45:51.744 user 0m10.939s 00:45:51.744 sys 0m1.996s 00:45:51.744 17:42:52 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:45:51.744 17:42:52 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:45:51.744 ************************************ 00:45:51.744 END TEST bdev_verify 00:45:51.744 ************************************ 00:45:52.003 17:42:52 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:45:52.003 17:42:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:45:52.003 17:42:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:52.003 17:42:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:45:52.003 ************************************ 00:45:52.003 START TEST bdev_verify_big_io 00:45:52.003 ************************************ 00:45:52.003 17:42:52 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:45:52.003 [2024-11-26 17:42:52.555165] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:45:52.003 [2024-11-26 17:42:52.555301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74724 ] 00:45:52.263 [2024-11-26 17:42:52.742552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:45:52.263 [2024-11-26 17:42:52.857698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:52.263 [2024-11-26 17:42:52.857714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:52.831 Running I/O for 5 seconds... 
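The bandwidth columns in these bdevperf tables follow directly from IOPS x I/O size. For the 4 KiB verify run above, 23212.80 IOPS x 4096 B = ~90.7 MiB/s, matching the final per-second sample; the 64 KiB table that follows scales the same way (for example nvme3n1 at 263.47 IOPS x 64 KiB = ~16.47 MiB/s). A quick arithmetic check (awk used only for the floating-point math):

  # Sanity-check bandwidth from IOPS and I/O size
  awk 'BEGIN { printf "%.2f MiB/s\n", 23212.80 * 4096  / 1048576 }'   # ~90.7  (table: 90.67)
  awk 'BEGIN { printf "%.2f MiB/s\n", 263.47  * 65536 / 1048576 }'   # ~16.47 (table: 16.47)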
00:45:57.066 2432.00 IOPS, 152.00 MiB/s [2024-11-26T17:42:59.151Z] 2280.00 IOPS, 142.50 MiB/s [2024-11-26T17:42:59.723Z] 2597.33 IOPS, 162.33 MiB/s 00:45:59.029 Latency(us) 00:45:59.029 [2024-11-26T17:42:59.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:59.030 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:45:59.030 Verification LBA range: start 0x0 length 0x8000 00:45:59.030 nvme0n1 : 5.66 107.51 6.72 0.00 0.00 1142909.71 45690.96 1300402.69 00:45:59.030 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:45:59.030 Verification LBA range: start 0x8000 length 0x8000 00:45:59.030 nvme0n1 : 5.49 201.16 12.57 0.00 0.00 619896.52 15160.13 828754.04 00:45:59.030 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:45:59.030 Verification LBA range: start 0x0 length 0x8000 00:45:59.030 nvme0n2 : 5.71 123.35 7.71 0.00 0.00 944501.99 4763.86 990462.15 00:45:59.030 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:45:59.030 Verification LBA range: start 0x8000 length 0x8000 00:45:59.030 nvme0n2 : 5.51 227.99 14.25 0.00 0.00 538638.09 18529.05 636725.67 00:45:59.030 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:45:59.030 Verification LBA range: start 0x0 length 0x8000 00:45:59.030 nvme0n3 : 5.75 108.48 6.78 0.00 0.00 1039624.78 43164.27 2466048.62 00:45:59.030 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:45:59.030 Verification LBA range: start 0x8000 length 0x8000 00:45:59.030 nvme0n3 : 5.49 232.97 14.56 0.00 0.00 526801.12 14844.30 549133.78 00:45:59.030 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:45:59.030 Verification LBA range: start 0x0 length 0xa000 00:45:59.030 nvme1n1 : 5.84 128.76 8.05 0.00 0.00 845473.20 28425.25 2142632.40 00:45:59.030 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:45:59.030 Verification LBA range: start 0xa000 length 0xa000 00:45:59.030 nvme1n1 : 5.51 222.07 13.88 0.00 0.00 535854.21 15686.53 859074.31 00:45:59.030 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:45:59.030 Verification LBA range: start 0x0 length 0x2000 00:45:59.030 nvme2n1 : 6.00 165.31 10.33 0.00 0.00 635248.29 7685.35 2263913.48 00:45:59.030 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:45:59.030 Verification LBA range: start 0x2000 length 0x2000 00:45:59.030 nvme2n1 : 5.52 208.68 13.04 0.00 0.00 567704.82 7580.07 1179121.61 00:45:59.030 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:45:59.030 Verification LBA range: start 0x0 length 0xbd0b 00:45:59.030 nvme3n1 : 6.13 263.47 16.47 0.00 0.00 390501.15 3132.04 1320616.20 00:45:59.030 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:45:59.030 Verification LBA range: start 0xbd0b length 0xbd0b 00:45:59.030 nvme3n1 : 5.52 249.15 15.57 0.00 0.00 468231.62 5790.33 811909.45 00:45:59.030 [2024-11-26T17:42:59.724Z] =================================================================================================================== 00:45:59.030 [2024-11-26T17:42:59.724Z] Total : 2238.91 139.93 0.00 0.00 622235.59 3132.04 2466048.62 00:46:00.405 00:46:00.405 real 0m8.591s 00:46:00.405 user 0m15.548s 00:46:00.405 sys 0m0.651s 00:46:00.405 17:43:01 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:00.405 17:43:01 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:46:00.405 ************************************ 00:46:00.405 END TEST bdev_verify_big_io 00:46:00.405 ************************************ 00:46:00.663 17:43:01 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:46:00.663 17:43:01 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:46:00.663 17:43:01 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:00.663 17:43:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:46:00.663 ************************************ 00:46:00.663 START TEST bdev_write_zeroes 00:46:00.663 ************************************ 00:46:00.663 17:43:01 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:46:00.663 [2024-11-26 17:43:01.229287] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:46:00.663 [2024-11-26 17:43:01.229421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74835 ] 00:46:00.921 [2024-11-26 17:43:01.419402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:00.921 [2024-11-26 17:43:01.534756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:01.487 Running I/O for 1 seconds... 
00:46:02.422 36736.00 IOPS, 143.50 MiB/s 00:46:02.422 Latency(us) 00:46:02.422 [2024-11-26T17:43:03.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:02.422 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:46:02.422 nvme0n1 : 1.02 5392.35 21.06 0.00 0.00 23716.49 8580.22 33268.07 00:46:02.422 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:46:02.422 nvme0n2 : 1.02 5385.68 21.04 0.00 0.00 23730.73 8527.58 32846.96 00:46:02.422 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:46:02.422 nvme0n3 : 1.02 5379.96 21.02 0.00 0.00 23739.72 8580.22 32846.96 00:46:02.422 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:46:02.422 nvme1n1 : 1.02 5374.51 20.99 0.00 0.00 23749.84 8632.85 33057.52 00:46:02.422 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:46:02.422 nvme2n1 : 1.03 5368.29 20.97 0.00 0.00 23763.28 8632.85 33057.52 00:46:02.422 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:46:02.422 nvme3n1 : 1.03 9732.12 38.02 0.00 0.00 13068.42 4974.42 33057.52 00:46:02.422 [2024-11-26T17:43:03.116Z] =================================================================================================================== 00:46:02.422 [2024-11-26T17:43:03.116Z] Total : 36632.91 143.10 0.00 0.00 20892.43 4974.42 33268.07 00:46:03.801 00:46:03.801 real 0m3.072s 00:46:03.801 user 0m2.347s 00:46:03.801 sys 0m0.543s 00:46:03.801 17:43:04 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:03.801 17:43:04 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:46:03.801 ************************************ 00:46:03.801 END TEST bdev_write_zeroes 00:46:03.801 ************************************ 00:46:03.801 17:43:04 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:46:03.801 17:43:04 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:46:03.801 17:43:04 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:03.801 17:43:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:46:03.801 ************************************ 00:46:03.801 START TEST bdev_json_nonenclosed 00:46:03.801 ************************************ 00:46:03.801 17:43:04 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:46:03.801 [2024-11-26 17:43:04.357534] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:46:03.801 [2024-11-26 17:43:04.357648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74895 ] 00:46:04.060 [2024-11-26 17:43:04.534527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:04.060 [2024-11-26 17:43:04.646210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:04.060 [2024-11-26 17:43:04.646302] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:46:04.060 [2024-11-26 17:43:04.646325] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:46:04.060 [2024-11-26 17:43:04.646336] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:46:04.319 00:46:04.319 real 0m0.621s 00:46:04.319 user 0m0.369s 00:46:04.319 sys 0m0.149s 00:46:04.319 17:43:04 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:04.319 17:43:04 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:46:04.319 ************************************ 00:46:04.319 END TEST bdev_json_nonenclosed 00:46:04.319 ************************************ 00:46:04.319 17:43:04 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:46:04.319 17:43:04 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:46:04.319 17:43:04 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:04.319 17:43:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:46:04.319 ************************************ 00:46:04.319 START TEST bdev_json_nonarray 00:46:04.319 ************************************ 00:46:04.319 17:43:04 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:46:04.578 [2024-11-26 17:43:05.054926] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:46:04.578 [2024-11-26 17:43:05.055047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74926 ] 00:46:04.578 [2024-11-26 17:43:05.232013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:04.838 [2024-11-26 17:43:05.336747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:04.838 [2024-11-26 17:43:05.336846] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
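The two JSON negative tests around here both exercise json_config_prepare_ctx: nonenclosed.json is rejected because the top-level value is not enclosed in {}, and nonarray.json because "subsystems" is not an array; in each case bdevperf stops non-zero, which is the expected outcome these tests assert. The accepted top-level shape is the one save_config emits later in this log; a sketch of a minimal fixture that would pass the shape checks (hypothetical path, empty subsystems list applies no config):

  # Sketch of the accepted top-level shape, for contrast with the two rejected fixtures
  cat > /tmp/valid.json <<'EOF'
  { "subsystems": [] }
  EOF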
00:46:04.838 [2024-11-26 17:43:05.336869] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:46:04.838 [2024-11-26 17:43:05.336882] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:46:05.097 00:46:05.097 real 0m0.613s 00:46:05.097 user 0m0.365s 00:46:05.097 sys 0m0.143s 00:46:05.097 17:43:05 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:05.097 17:43:05 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:46:05.097 ************************************ 00:46:05.097 END TEST bdev_json_nonarray 00:46:05.097 ************************************ 00:46:05.097 17:43:05 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:46:05.097 17:43:05 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:46:05.097 17:43:05 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:46:05.097 17:43:05 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:46:05.097 17:43:05 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:46:05.097 17:43:05 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:46:05.097 17:43:05 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:46:05.097 17:43:05 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:46:05.097 17:43:05 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:46:05.097 17:43:05 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:46:05.097 17:43:05 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:46:05.097 17:43:05 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:46:05.666 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:46:06.604 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:46:06.604 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:46:11.875 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:46:11.875 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:46:11.875 00:46:11.875 real 1m1.768s 00:46:11.875 user 1m35.800s 00:46:11.875 sys 0m43.830s 00:46:11.875 17:43:11 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:11.875 17:43:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:46:11.875 ************************************ 00:46:11.875 END TEST blockdev_xnvme 00:46:11.875 ************************************ 00:46:11.875 17:43:11 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:46:11.875 17:43:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:11.875 17:43:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:11.875 17:43:11 -- common/autotest_common.sh@10 -- # set +x 00:46:11.875 ************************************ 00:46:11.875 START TEST ublk 00:46:11.875 ************************************ 00:46:11.875 17:43:11 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:46:11.875 * Looking for test storage... 
00:46:11.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:46:11.875 17:43:12 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:46:11.875 17:43:12 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:46:11.875 17:43:12 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:46:11.875 17:43:12 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:46:11.875 17:43:12 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:11.875 17:43:12 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:11.875 17:43:12 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:11.875 17:43:12 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:46:11.875 17:43:12 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:46:11.875 17:43:12 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:46:11.875 17:43:12 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:46:11.875 17:43:12 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:46:11.875 17:43:12 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:46:11.875 17:43:12 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:46:11.875 17:43:12 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:11.875 17:43:12 ublk -- scripts/common.sh@344 -- # case "$op" in 00:46:11.875 17:43:12 ublk -- scripts/common.sh@345 -- # : 1 00:46:11.875 17:43:12 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:11.875 17:43:12 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:11.875 17:43:12 ublk -- scripts/common.sh@365 -- # decimal 1 00:46:11.875 17:43:12 ublk -- scripts/common.sh@353 -- # local d=1 00:46:11.875 17:43:12 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:11.875 17:43:12 ublk -- scripts/common.sh@355 -- # echo 1 00:46:11.875 17:43:12 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:46:11.875 17:43:12 ublk -- scripts/common.sh@366 -- # decimal 2 00:46:11.875 17:43:12 ublk -- scripts/common.sh@353 -- # local d=2 00:46:11.875 17:43:12 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:11.875 17:43:12 ublk -- scripts/common.sh@355 -- # echo 2 00:46:11.875 17:43:12 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:46:11.875 17:43:12 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:11.875 17:43:12 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:11.875 17:43:12 ublk -- scripts/common.sh@368 -- # return 0 00:46:11.875 17:43:12 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:11.875 17:43:12 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:46:11.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:11.875 --rc genhtml_branch_coverage=1 00:46:11.875 --rc genhtml_function_coverage=1 00:46:11.875 --rc genhtml_legend=1 00:46:11.875 --rc geninfo_all_blocks=1 00:46:11.875 --rc geninfo_unexecuted_blocks=1 00:46:11.875 00:46:11.875 ' 00:46:11.875 17:43:12 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:46:11.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:11.875 --rc genhtml_branch_coverage=1 00:46:11.875 --rc genhtml_function_coverage=1 00:46:11.875 --rc genhtml_legend=1 00:46:11.875 --rc geninfo_all_blocks=1 00:46:11.875 --rc geninfo_unexecuted_blocks=1 00:46:11.875 00:46:11.875 ' 00:46:11.875 17:43:12 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:46:11.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:11.875 --rc genhtml_branch_coverage=1 00:46:11.875 --rc 
genhtml_function_coverage=1 00:46:11.875 --rc genhtml_legend=1 00:46:11.875 --rc geninfo_all_blocks=1 00:46:11.875 --rc geninfo_unexecuted_blocks=1 00:46:11.875 00:46:11.875 ' 00:46:11.875 17:43:12 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:46:11.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:11.875 --rc genhtml_branch_coverage=1 00:46:11.875 --rc genhtml_function_coverage=1 00:46:11.875 --rc genhtml_legend=1 00:46:11.875 --rc geninfo_all_blocks=1 00:46:11.875 --rc geninfo_unexecuted_blocks=1 00:46:11.875 00:46:11.875 ' 00:46:11.875 17:43:12 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:46:11.875 17:43:12 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:46:11.875 17:43:12 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:46:11.875 17:43:12 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:46:11.875 17:43:12 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:46:11.875 17:43:12 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:46:11.875 17:43:12 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:46:11.875 17:43:12 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:46:11.875 17:43:12 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:46:11.875 17:43:12 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:46:11.875 17:43:12 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:46:11.875 17:43:12 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:46:11.875 17:43:12 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:46:11.875 17:43:12 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:46:11.875 17:43:12 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:46:11.875 17:43:12 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:46:11.875 17:43:12 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:46:11.875 17:43:12 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:46:11.875 17:43:12 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:46:11.875 17:43:12 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:46:11.875 17:43:12 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:11.875 17:43:12 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:11.875 17:43:12 ublk -- common/autotest_common.sh@10 -- # set +x 00:46:11.875 ************************************ 00:46:11.875 START TEST test_save_ublk_config 00:46:11.875 ************************************ 00:46:11.875 17:43:12 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:46:11.875 17:43:12 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:46:11.875 17:43:12 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75224 00:46:11.875 17:43:12 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:46:11.875 17:43:12 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:46:11.875 17:43:12 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75224 00:46:11.875 17:43:12 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75224 ']' 00:46:11.875 17:43:12 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:11.875 17:43:12 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:11.875 17:43:12 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:46:11.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:11.875 17:43:12 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:11.875 17:43:12 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:46:11.875 [2024-11-26 17:43:12.312010] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:46:11.875 [2024-11-26 17:43:12.312344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75224 ] 00:46:11.875 [2024-11-26 17:43:12.497350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:12.202 [2024-11-26 17:43:12.643861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:13.139 17:43:13 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:13.139 17:43:13 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:46:13.139 17:43:13 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:46:13.139 17:43:13 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:46:13.139 17:43:13 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:13.139 17:43:13 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:46:13.139 [2024-11-26 17:43:13.733531] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:46:13.139 [2024-11-26 17:43:13.734943] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:46:13.139 malloc0 00:46:13.398 [2024-11-26 17:43:13.837701] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:46:13.398 [2024-11-26 17:43:13.837834] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:46:13.398 [2024-11-26 17:43:13.837849] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:46:13.398 [2024-11-26 17:43:13.837859] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:46:13.398 [2024-11-26 17:43:13.846683] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:46:13.398 [2024-11-26 17:43:13.846715] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:46:13.398 [2024-11-26 17:43:13.853539] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:46:13.398 [2024-11-26 17:43:13.853663] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:46:13.398 [2024-11-26 17:43:13.870539] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:46:13.398 0 00:46:13.398 17:43:13 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:13.398 17:43:13 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:46:13.398 17:43:13 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:13.398 17:43:13 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:46:13.657 17:43:14 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:13.657 17:43:14 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:46:13.657 "subsystems": [ 00:46:13.657 { 00:46:13.657 "subsystem": "fsdev", 00:46:13.657 
"config": [ 00:46:13.657 { 00:46:13.657 "method": "fsdev_set_opts", 00:46:13.657 "params": { 00:46:13.657 "fsdev_io_pool_size": 65535, 00:46:13.657 "fsdev_io_cache_size": 256 00:46:13.657 } 00:46:13.657 } 00:46:13.657 ] 00:46:13.657 }, 00:46:13.657 { 00:46:13.657 "subsystem": "keyring", 00:46:13.657 "config": [] 00:46:13.657 }, 00:46:13.657 { 00:46:13.657 "subsystem": "iobuf", 00:46:13.657 "config": [ 00:46:13.657 { 00:46:13.657 "method": "iobuf_set_options", 00:46:13.657 "params": { 00:46:13.657 "small_pool_count": 8192, 00:46:13.657 "large_pool_count": 1024, 00:46:13.657 "small_bufsize": 8192, 00:46:13.657 "large_bufsize": 135168, 00:46:13.657 "enable_numa": false 00:46:13.657 } 00:46:13.657 } 00:46:13.657 ] 00:46:13.657 }, 00:46:13.657 { 00:46:13.657 "subsystem": "sock", 00:46:13.657 "config": [ 00:46:13.657 { 00:46:13.657 "method": "sock_set_default_impl", 00:46:13.657 "params": { 00:46:13.657 "impl_name": "posix" 00:46:13.657 } 00:46:13.657 }, 00:46:13.657 { 00:46:13.657 "method": "sock_impl_set_options", 00:46:13.657 "params": { 00:46:13.657 "impl_name": "ssl", 00:46:13.657 "recv_buf_size": 4096, 00:46:13.657 "send_buf_size": 4096, 00:46:13.657 "enable_recv_pipe": true, 00:46:13.657 "enable_quickack": false, 00:46:13.657 "enable_placement_id": 0, 00:46:13.657 "enable_zerocopy_send_server": true, 00:46:13.657 "enable_zerocopy_send_client": false, 00:46:13.657 "zerocopy_threshold": 0, 00:46:13.658 "tls_version": 0, 00:46:13.658 "enable_ktls": false 00:46:13.658 } 00:46:13.658 }, 00:46:13.658 { 00:46:13.658 "method": "sock_impl_set_options", 00:46:13.658 "params": { 00:46:13.658 "impl_name": "posix", 00:46:13.658 "recv_buf_size": 2097152, 00:46:13.658 "send_buf_size": 2097152, 00:46:13.658 "enable_recv_pipe": true, 00:46:13.658 "enable_quickack": false, 00:46:13.658 "enable_placement_id": 0, 00:46:13.658 "enable_zerocopy_send_server": true, 00:46:13.658 "enable_zerocopy_send_client": false, 00:46:13.658 "zerocopy_threshold": 0, 00:46:13.658 "tls_version": 0, 00:46:13.658 "enable_ktls": false 00:46:13.658 } 00:46:13.658 } 00:46:13.658 ] 00:46:13.658 }, 00:46:13.658 { 00:46:13.658 "subsystem": "vmd", 00:46:13.658 "config": [] 00:46:13.658 }, 00:46:13.658 { 00:46:13.658 "subsystem": "accel", 00:46:13.658 "config": [ 00:46:13.658 { 00:46:13.658 "method": "accel_set_options", 00:46:13.658 "params": { 00:46:13.658 "small_cache_size": 128, 00:46:13.658 "large_cache_size": 16, 00:46:13.658 "task_count": 2048, 00:46:13.658 "sequence_count": 2048, 00:46:13.658 "buf_count": 2048 00:46:13.658 } 00:46:13.658 } 00:46:13.658 ] 00:46:13.658 }, 00:46:13.658 { 00:46:13.658 "subsystem": "bdev", 00:46:13.658 "config": [ 00:46:13.658 { 00:46:13.658 "method": "bdev_set_options", 00:46:13.658 "params": { 00:46:13.658 "bdev_io_pool_size": 65535, 00:46:13.658 "bdev_io_cache_size": 256, 00:46:13.658 "bdev_auto_examine": true, 00:46:13.658 "iobuf_small_cache_size": 128, 00:46:13.658 "iobuf_large_cache_size": 16 00:46:13.658 } 00:46:13.658 }, 00:46:13.658 { 00:46:13.658 "method": "bdev_raid_set_options", 00:46:13.658 "params": { 00:46:13.658 "process_window_size_kb": 1024, 00:46:13.658 "process_max_bandwidth_mb_sec": 0 00:46:13.658 } 00:46:13.658 }, 00:46:13.658 { 00:46:13.658 "method": "bdev_iscsi_set_options", 00:46:13.658 "params": { 00:46:13.658 "timeout_sec": 30 00:46:13.658 } 00:46:13.658 }, 00:46:13.658 { 00:46:13.658 "method": "bdev_nvme_set_options", 00:46:13.658 "params": { 00:46:13.658 "action_on_timeout": "none", 00:46:13.658 "timeout_us": 0, 00:46:13.658 "timeout_admin_us": 0, 00:46:13.658 
"keep_alive_timeout_ms": 10000, 00:46:13.658 "arbitration_burst": 0, 00:46:13.658 "low_priority_weight": 0, 00:46:13.658 "medium_priority_weight": 0, 00:46:13.658 "high_priority_weight": 0, 00:46:13.658 "nvme_adminq_poll_period_us": 10000, 00:46:13.658 "nvme_ioq_poll_period_us": 0, 00:46:13.658 "io_queue_requests": 0, 00:46:13.658 "delay_cmd_submit": true, 00:46:13.658 "transport_retry_count": 4, 00:46:13.658 "bdev_retry_count": 3, 00:46:13.658 "transport_ack_timeout": 0, 00:46:13.658 "ctrlr_loss_timeout_sec": 0, 00:46:13.658 "reconnect_delay_sec": 0, 00:46:13.658 "fast_io_fail_timeout_sec": 0, 00:46:13.658 "disable_auto_failback": false, 00:46:13.658 "generate_uuids": false, 00:46:13.658 "transport_tos": 0, 00:46:13.658 "nvme_error_stat": false, 00:46:13.658 "rdma_srq_size": 0, 00:46:13.658 "io_path_stat": false, 00:46:13.658 "allow_accel_sequence": false, 00:46:13.658 "rdma_max_cq_size": 0, 00:46:13.658 "rdma_cm_event_timeout_ms": 0, 00:46:13.658 "dhchap_digests": [ 00:46:13.658 "sha256", 00:46:13.658 "sha384", 00:46:13.658 "sha512" 00:46:13.658 ], 00:46:13.658 "dhchap_dhgroups": [ 00:46:13.658 "null", 00:46:13.658 "ffdhe2048", 00:46:13.658 "ffdhe3072", 00:46:13.658 "ffdhe4096", 00:46:13.658 "ffdhe6144", 00:46:13.658 "ffdhe8192" 00:46:13.658 ] 00:46:13.658 } 00:46:13.658 }, 00:46:13.658 { 00:46:13.658 "method": "bdev_nvme_set_hotplug", 00:46:13.658 "params": { 00:46:13.658 "period_us": 100000, 00:46:13.658 "enable": false 00:46:13.658 } 00:46:13.658 }, 00:46:13.658 { 00:46:13.658 "method": "bdev_malloc_create", 00:46:13.658 "params": { 00:46:13.658 "name": "malloc0", 00:46:13.658 "num_blocks": 8192, 00:46:13.658 "block_size": 4096, 00:46:13.658 "physical_block_size": 4096, 00:46:13.658 "uuid": "7c7327a1-5c4a-4573-9f6c-2f31ac2072db", 00:46:13.658 "optimal_io_boundary": 0, 00:46:13.658 "md_size": 0, 00:46:13.658 "dif_type": 0, 00:46:13.658 "dif_is_head_of_md": false, 00:46:13.658 "dif_pi_format": 0 00:46:13.658 } 00:46:13.658 }, 00:46:13.658 { 00:46:13.658 "method": "bdev_wait_for_examine" 00:46:13.658 } 00:46:13.658 ] 00:46:13.658 }, 00:46:13.658 { 00:46:13.658 "subsystem": "scsi", 00:46:13.658 "config": null 00:46:13.658 }, 00:46:13.658 { 00:46:13.658 "subsystem": "scheduler", 00:46:13.658 "config": [ 00:46:13.658 { 00:46:13.658 "method": "framework_set_scheduler", 00:46:13.658 "params": { 00:46:13.658 "name": "static" 00:46:13.658 } 00:46:13.658 } 00:46:13.658 ] 00:46:13.658 }, 00:46:13.658 { 00:46:13.658 "subsystem": "vhost_scsi", 00:46:13.658 "config": [] 00:46:13.658 }, 00:46:13.658 { 00:46:13.658 "subsystem": "vhost_blk", 00:46:13.658 "config": [] 00:46:13.658 }, 00:46:13.658 { 00:46:13.658 "subsystem": "ublk", 00:46:13.658 "config": [ 00:46:13.658 { 00:46:13.658 "method": "ublk_create_target", 00:46:13.658 "params": { 00:46:13.658 "cpumask": "1" 00:46:13.658 } 00:46:13.658 }, 00:46:13.658 { 00:46:13.658 "method": "ublk_start_disk", 00:46:13.658 "params": { 00:46:13.658 "bdev_name": "malloc0", 00:46:13.658 "ublk_id": 0, 00:46:13.658 "num_queues": 1, 00:46:13.658 "queue_depth": 128 00:46:13.658 } 00:46:13.658 } 00:46:13.658 ] 00:46:13.658 }, 00:46:13.658 { 00:46:13.658 "subsystem": "nbd", 00:46:13.658 "config": [] 00:46:13.658 }, 00:46:13.658 { 00:46:13.658 "subsystem": "nvmf", 00:46:13.658 "config": [ 00:46:13.658 { 00:46:13.658 "method": "nvmf_set_config", 00:46:13.658 "params": { 00:46:13.658 "discovery_filter": "match_any", 00:46:13.658 "admin_cmd_passthru": { 00:46:13.658 "identify_ctrlr": false 00:46:13.658 }, 00:46:13.658 "dhchap_digests": [ 00:46:13.658 "sha256", 00:46:13.658 
"sha384", 00:46:13.658 "sha512" 00:46:13.658 ], 00:46:13.658 "dhchap_dhgroups": [ 00:46:13.658 "null", 00:46:13.658 "ffdhe2048", 00:46:13.658 "ffdhe3072", 00:46:13.658 "ffdhe4096", 00:46:13.658 "ffdhe6144", 00:46:13.658 "ffdhe8192" 00:46:13.658 ] 00:46:13.658 } 00:46:13.658 }, 00:46:13.658 { 00:46:13.658 "method": "nvmf_set_max_subsystems", 00:46:13.658 "params": { 00:46:13.658 "max_subsystems": 1024 00:46:13.658 } 00:46:13.658 }, 00:46:13.658 { 00:46:13.658 "method": "nvmf_set_crdt", 00:46:13.658 "params": { 00:46:13.658 "crdt1": 0, 00:46:13.658 "crdt2": 0, 00:46:13.658 "crdt3": 0 00:46:13.658 } 00:46:13.658 } 00:46:13.658 ] 00:46:13.658 }, 00:46:13.658 { 00:46:13.658 "subsystem": "iscsi", 00:46:13.658 "config": [ 00:46:13.658 { 00:46:13.658 "method": "iscsi_set_options", 00:46:13.658 "params": { 00:46:13.658 "node_base": "iqn.2016-06.io.spdk", 00:46:13.658 "max_sessions": 128, 00:46:13.658 "max_connections_per_session": 2, 00:46:13.658 "max_queue_depth": 64, 00:46:13.658 "default_time2wait": 2, 00:46:13.658 "default_time2retain": 20, 00:46:13.658 "first_burst_length": 8192, 00:46:13.658 "immediate_data": true, 00:46:13.658 "allow_duplicated_isid": false, 00:46:13.658 "error_recovery_level": 0, 00:46:13.658 "nop_timeout": 60, 00:46:13.658 "nop_in_interval": 30, 00:46:13.658 "disable_chap": false, 00:46:13.658 "require_chap": false, 00:46:13.658 "mutual_chap": false, 00:46:13.658 "chap_group": 0, 00:46:13.658 "max_large_datain_per_connection": 64, 00:46:13.658 "max_r2t_per_connection": 4, 00:46:13.658 "pdu_pool_size": 36864, 00:46:13.658 "immediate_data_pool_size": 16384, 00:46:13.658 "data_out_pool_size": 2048 00:46:13.658 } 00:46:13.658 } 00:46:13.658 ] 00:46:13.658 } 00:46:13.658 ] 00:46:13.658 }' 00:46:13.658 17:43:14 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75224 00:46:13.658 17:43:14 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75224 ']' 00:46:13.658 17:43:14 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75224 00:46:13.658 17:43:14 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:46:13.658 17:43:14 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:13.658 17:43:14 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75224 00:46:13.658 killing process with pid 75224 00:46:13.658 17:43:14 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:13.658 17:43:14 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:13.658 17:43:14 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75224' 00:46:13.658 17:43:14 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75224 00:46:13.658 17:43:14 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75224 00:46:15.563 [2024-11-26 17:43:15.816191] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:46:15.563 [2024-11-26 17:43:15.851618] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:46:15.563 [2024-11-26 17:43:15.851744] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:46:15.563 [2024-11-26 17:43:15.859536] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:46:15.563 [2024-11-26 17:43:15.859598] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 
00:46:15.563 [2024-11-26 17:43:15.859616] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:46:15.563 [2024-11-26 17:43:15.859645] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:46:15.563 [2024-11-26 17:43:15.859830] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:46:17.468 17:43:17 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75301 00:46:17.468 17:43:17 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75301 00:46:17.468 17:43:17 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75301 ']' 00:46:17.468 17:43:17 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:17.468 17:43:17 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:17.468 17:43:17 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:17.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:17.469 17:43:17 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:17.469 17:43:17 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:46:17.469 17:43:17 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:46:17.469 17:43:17 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:46:17.469 "subsystems": [ 00:46:17.469 { 00:46:17.469 "subsystem": "fsdev", 00:46:17.469 "config": [ 00:46:17.469 { 00:46:17.469 "method": "fsdev_set_opts", 00:46:17.469 "params": { 00:46:17.469 "fsdev_io_pool_size": 65535, 00:46:17.469 "fsdev_io_cache_size": 256 00:46:17.469 } 00:46:17.469 } 00:46:17.469 ] 00:46:17.469 }, 00:46:17.469 { 00:46:17.469 "subsystem": "keyring", 00:46:17.469 "config": [] 00:46:17.469 }, 00:46:17.469 { 00:46:17.469 "subsystem": "iobuf", 00:46:17.469 "config": [ 00:46:17.469 { 00:46:17.469 "method": "iobuf_set_options", 00:46:17.469 "params": { 00:46:17.469 "small_pool_count": 8192, 00:46:17.469 "large_pool_count": 1024, 00:46:17.469 "small_bufsize": 8192, 00:46:17.469 "large_bufsize": 135168, 00:46:17.469 "enable_numa": false 00:46:17.469 } 00:46:17.469 } 00:46:17.469 ] 00:46:17.469 }, 00:46:17.469 { 00:46:17.469 "subsystem": "sock", 00:46:17.469 "config": [ 00:46:17.469 { 00:46:17.469 "method": "sock_set_default_impl", 00:46:17.469 "params": { 00:46:17.469 "impl_name": "posix" 00:46:17.469 } 00:46:17.469 }, 00:46:17.469 { 00:46:17.469 "method": "sock_impl_set_options", 00:46:17.469 "params": { 00:46:17.469 "impl_name": "ssl", 00:46:17.469 "recv_buf_size": 4096, 00:46:17.469 "send_buf_size": 4096, 00:46:17.469 "enable_recv_pipe": true, 00:46:17.469 "enable_quickack": false, 00:46:17.469 "enable_placement_id": 0, 00:46:17.469 "enable_zerocopy_send_server": true, 00:46:17.469 "enable_zerocopy_send_client": false, 00:46:17.469 "zerocopy_threshold": 0, 00:46:17.469 "tls_version": 0, 00:46:17.469 "enable_ktls": false 00:46:17.469 } 00:46:17.469 }, 00:46:17.469 { 00:46:17.469 "method": "sock_impl_set_options", 00:46:17.469 "params": { 00:46:17.469 "impl_name": "posix", 00:46:17.469 "recv_buf_size": 2097152, 00:46:17.469 "send_buf_size": 2097152, 00:46:17.469 "enable_recv_pipe": true, 00:46:17.469 "enable_quickack": false, 00:46:17.469 "enable_placement_id": 0, 00:46:17.469 "enable_zerocopy_send_server": true, 00:46:17.469 "enable_zerocopy_send_client": false, 00:46:17.469 "zerocopy_threshold": 0, 
00:46:17.469 "tls_version": 0, 00:46:17.469 "enable_ktls": false 00:46:17.469 } 00:46:17.469 } 00:46:17.469 ] 00:46:17.469 }, 00:46:17.469 { 00:46:17.469 "subsystem": "vmd", 00:46:17.469 "config": [] 00:46:17.469 }, 00:46:17.469 { 00:46:17.469 "subsystem": "accel", 00:46:17.469 "config": [ 00:46:17.469 { 00:46:17.469 "method": "accel_set_options", 00:46:17.469 "params": { 00:46:17.469 "small_cache_size": 128, 00:46:17.469 "large_cache_size": 16, 00:46:17.469 "task_count": 2048, 00:46:17.469 "sequence_count": 2048, 00:46:17.469 "buf_count": 2048 00:46:17.469 } 00:46:17.469 } 00:46:17.469 ] 00:46:17.469 }, 00:46:17.469 { 00:46:17.469 "subsystem": "bdev", 00:46:17.469 "config": [ 00:46:17.469 { 00:46:17.469 "method": "bdev_set_options", 00:46:17.469 "params": { 00:46:17.469 "bdev_io_pool_size": 65535, 00:46:17.469 "bdev_io_cache_size": 256, 00:46:17.469 "bdev_auto_examine": true, 00:46:17.469 "iobuf_small_cache_size": 128, 00:46:17.469 "iobuf_large_cache_size": 16 00:46:17.469 } 00:46:17.469 }, 00:46:17.469 { 00:46:17.469 "method": "bdev_raid_set_options", 00:46:17.469 "params": { 00:46:17.469 "process_window_size_kb": 1024, 00:46:17.469 "process_max_bandwidth_mb_sec": 0 00:46:17.469 } 00:46:17.469 }, 00:46:17.469 { 00:46:17.469 "method": "bdev_iscsi_set_options", 00:46:17.469 "params": { 00:46:17.469 "timeout_sec": 30 00:46:17.469 } 00:46:17.469 }, 00:46:17.469 { 00:46:17.469 "method": "bdev_nvme_set_options", 00:46:17.469 "params": { 00:46:17.469 "action_on_timeout": "none", 00:46:17.469 "timeout_us": 0, 00:46:17.469 "timeout_admin_us": 0, 00:46:17.469 "keep_alive_timeout_ms": 10000, 00:46:17.469 "arbitration_burst": 0, 00:46:17.469 "low_priority_weight": 0, 00:46:17.469 "medium_priority_weight": 0, 00:46:17.469 "high_priority_weight": 0, 00:46:17.469 "nvme_adminq_poll_period_us": 10000, 00:46:17.469 "nvme_ioq_poll_period_us": 0, 00:46:17.469 "io_queue_requests": 0, 00:46:17.469 "delay_cmd_submit": true, 00:46:17.469 "transport_retry_count": 4, 00:46:17.469 "bdev_retry_count": 3, 00:46:17.469 "transport_ack_timeout": 0, 00:46:17.469 "ctrlr_loss_timeout_sec": 0, 00:46:17.469 "reconnect_delay_sec": 0, 00:46:17.469 "fast_io_fail_timeout_sec": 0, 00:46:17.469 "disable_auto_failback": false, 00:46:17.469 "generate_uuids": false, 00:46:17.469 "transport_tos": 0, 00:46:17.469 "nvme_error_stat": false, 00:46:17.469 "rdma_srq_size": 0, 00:46:17.469 "io_path_stat": false, 00:46:17.469 "allow_accel_sequence": false, 00:46:17.469 "rdma_max_cq_size": 0, 00:46:17.469 "rdma_cm_event_timeout_ms": 0, 00:46:17.469 "dhchap_digests": [ 00:46:17.469 "sha256", 00:46:17.469 "sha384", 00:46:17.469 "sha512" 00:46:17.469 ], 00:46:17.469 "dhchap_dhgroups": [ 00:46:17.469 "null", 00:46:17.469 "ffdhe2048", 00:46:17.469 "ffdhe3072", 00:46:17.469 "ffdhe4096", 00:46:17.469 "ffdhe6144", 00:46:17.469 "ffdhe8192" 00:46:17.469 ] 00:46:17.469 } 00:46:17.469 }, 00:46:17.469 { 00:46:17.469 "method": "bdev_nvme_set_hotplug", 00:46:17.469 "params": { 00:46:17.469 "period_us": 100000, 00:46:17.469 "enable": false 00:46:17.469 } 00:46:17.469 }, 00:46:17.469 { 00:46:17.469 "method": "bdev_malloc_create", 00:46:17.469 "params": { 00:46:17.469 "name": "malloc0", 00:46:17.469 "num_blocks": 8192, 00:46:17.469 "block_size": 4096, 00:46:17.469 "physical_block_size": 4096, 00:46:17.469 "uuid": "7c7327a1-5c4a-4573-9f6c-2f31ac2072db", 00:46:17.469 "optimal_io_boundary": 0, 00:46:17.469 "md_size": 0, 00:46:17.469 "dif_type": 0, 00:46:17.469 "dif_is_head_of_md": false, 00:46:17.469 "dif_pi_format": 0 00:46:17.469 } 00:46:17.469 }, 00:46:17.469 
{ 00:46:17.469 "method": "bdev_wait_for_examine" 00:46:17.469 } 00:46:17.469 ] 00:46:17.469 }, 00:46:17.469 { 00:46:17.469 "subsystem": "scsi", 00:46:17.469 "config": null 00:46:17.469 }, 00:46:17.469 { 00:46:17.469 "subsystem": "scheduler", 00:46:17.469 "config": [ 00:46:17.469 { 00:46:17.469 "method": "framework_set_scheduler", 00:46:17.469 "params": { 00:46:17.469 "name": "static" 00:46:17.469 } 00:46:17.469 } 00:46:17.469 ] 00:46:17.469 }, 00:46:17.469 { 00:46:17.469 "subsystem": "vhost_scsi", 00:46:17.469 "config": [] 00:46:17.469 }, 00:46:17.469 { 00:46:17.469 "subsystem": "vhost_blk", 00:46:17.469 "config": [] 00:46:17.469 }, 00:46:17.469 { 00:46:17.469 "subsystem": "ublk", 00:46:17.469 "config": [ 00:46:17.469 { 00:46:17.469 "method": "ublk_create_target", 00:46:17.469 "params": { 00:46:17.469 "cpumask": "1" 00:46:17.469 } 00:46:17.469 }, 00:46:17.469 { 00:46:17.469 "method": "ublk_start_disk", 00:46:17.469 "params": { 00:46:17.469 "bdev_name": "malloc0", 00:46:17.469 "ublk_id": 0, 00:46:17.469 "num_queues": 1, 00:46:17.469 "queue_depth": 128 00:46:17.469 } 00:46:17.469 } 00:46:17.469 ] 00:46:17.469 }, 00:46:17.469 { 00:46:17.469 "subsystem": "nbd", 00:46:17.469 "config": [] 00:46:17.469 }, 00:46:17.469 { 00:46:17.469 "subsystem": "nvmf", 00:46:17.469 "config": [ 00:46:17.469 { 00:46:17.469 "method": "nvmf_set_config", 00:46:17.469 "params": { 00:46:17.469 "discovery_filter": "match_any", 00:46:17.469 "admin_cmd_passthru": { 00:46:17.469 "identify_ctrlr": false 00:46:17.469 }, 00:46:17.469 "dhchap_digests": [ 00:46:17.469 "sha256", 00:46:17.469 "sha384", 00:46:17.469 "sha512" 00:46:17.469 ], 00:46:17.469 "dhchap_dhgroups": [ 00:46:17.469 "null", 00:46:17.469 "ffdhe2048", 00:46:17.469 "ffdhe3072", 00:46:17.469 "ffdhe4096", 00:46:17.469 "ffdhe6144", 00:46:17.469 "ffdhe8192" 00:46:17.469 ] 00:46:17.469 } 00:46:17.469 }, 00:46:17.469 { 00:46:17.469 "method": "nvmf_set_max_subsystems", 00:46:17.469 "params": { 00:46:17.469 "max_subsystems": 1024 00:46:17.469 } 00:46:17.469 }, 00:46:17.469 { 00:46:17.469 "method": "nvmf_set_crdt", 00:46:17.469 "params": { 00:46:17.469 "crdt1": 0, 00:46:17.469 "crdt2": 0, 00:46:17.469 "crdt3": 0 00:46:17.469 } 00:46:17.469 } 00:46:17.469 ] 00:46:17.469 }, 00:46:17.469 { 00:46:17.469 "subsystem": "iscsi", 00:46:17.469 "config": [ 00:46:17.469 { 00:46:17.469 "method": "iscsi_set_options", 00:46:17.469 "params": { 00:46:17.469 "node_base": "iqn.2016-06.io.spdk", 00:46:17.470 "max_sessions": 128, 00:46:17.470 "max_connections_per_session": 2, 00:46:17.470 "max_queue_depth": 64, 00:46:17.470 "default_time2wait": 2, 00:46:17.470 "default_time2retain": 20, 00:46:17.470 "first_burst_length": 8192, 00:46:17.470 "immediate_data": true, 00:46:17.470 "allow_duplicated_isid": false, 00:46:17.470 "error_recovery_level": 0, 00:46:17.470 "nop_timeout": 60, 00:46:17.470 "nop_in_interval": 30, 00:46:17.470 "disable_chap": false, 00:46:17.470 "require_chap": false, 00:46:17.470 "mutual_chap": false, 00:46:17.470 "chap_group": 0, 00:46:17.470 "max_large_datain_per_connection": 64, 00:46:17.470 "max_r2t_per_connection": 4, 00:46:17.470 "pdu_pool_size": 36864, 00:46:17.470 "immediate_data_pool_size": 16384, 00:46:17.470 "data_out_pool_size": 2048 00:46:17.470 } 00:46:17.470 } 00:46:17.470 ] 00:46:17.470 } 00:46:17.470 ] 00:46:17.470 }' 00:46:17.470 [2024-11-26 17:43:18.053587] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:46:17.470 [2024-11-26 17:43:18.053920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75301 ] 00:46:17.729 [2024-11-26 17:43:18.240251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:17.729 [2024-11-26 17:43:18.377560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:19.107 [2024-11-26 17:43:19.593515] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:46:19.107 [2024-11-26 17:43:19.594875] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:46:19.107 [2024-11-26 17:43:19.601658] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:46:19.107 [2024-11-26 17:43:19.601752] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:46:19.107 [2024-11-26 17:43:19.601766] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:46:19.107 [2024-11-26 17:43:19.601775] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:46:19.107 [2024-11-26 17:43:19.610619] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:46:19.107 [2024-11-26 17:43:19.610646] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:46:19.107 [2024-11-26 17:43:19.617527] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:46:19.107 [2024-11-26 17:43:19.617626] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:46:19.108 [2024-11-26 17:43:19.634521] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:46:19.108 17:43:19 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:19.108 17:43:19 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:46:19.108 17:43:19 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:46:19.108 17:43:19 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:19.108 17:43:19 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:46:19.108 17:43:19 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:46:19.108 17:43:19 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:19.108 17:43:19 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:46:19.108 17:43:19 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:46:19.108 17:43:19 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75301 00:46:19.108 17:43:19 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75301 ']' 00:46:19.108 17:43:19 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75301 00:46:19.108 17:43:19 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:46:19.108 17:43:19 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:19.108 17:43:19 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75301 00:46:19.108 killing process with pid 75301 00:46:19.108 17:43:19 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:19.108 
17:43:19 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:19.108 17:43:19 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75301' 00:46:19.108 17:43:19 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75301 00:46:19.108 17:43:19 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75301 00:46:21.014 [2024-11-26 17:43:21.492119] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:46:21.014 [2024-11-26 17:43:21.520542] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:46:21.014 [2024-11-26 17:43:21.520682] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:46:21.014 [2024-11-26 17:43:21.529536] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:46:21.014 [2024-11-26 17:43:21.529596] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:46:21.014 [2024-11-26 17:43:21.529606] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:46:21.014 [2024-11-26 17:43:21.529636] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:46:21.014 [2024-11-26 17:43:21.529802] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:46:22.937 17:43:23 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:46:22.937 00:46:22.937 real 0m11.397s 00:46:22.937 user 0m8.641s 00:46:22.937 sys 0m3.518s 00:46:22.937 17:43:23 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:22.937 ************************************ 00:46:22.937 END TEST test_save_ublk_config 00:46:22.937 ************************************ 00:46:22.937 17:43:23 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:46:23.198 17:43:23 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75394 00:46:23.198 17:43:23 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:46:23.198 17:43:23 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:23.198 17:43:23 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75394 00:46:23.198 17:43:23 ublk -- common/autotest_common.sh@835 -- # '[' -z 75394 ']' 00:46:23.198 17:43:23 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:23.198 17:43:23 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:23.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:23.198 17:43:23 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:23.198 17:43:23 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:23.198 17:43:23 ublk -- common/autotest_common.sh@10 -- # set +x 00:46:23.198 [2024-11-26 17:43:23.769953] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
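The test_create_ublk run hosted by pid 75394 below reduces to four RPCs; a condensed sketch built from the same commands the harness expands:

    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create 128 4096             # creates Malloc0
    ./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # exposes /dev/ublkb0
    ./scripts/rpc.py ublk_get_disks -n 0                     # confirm id, queues, depth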
00:46:23.198 [2024-11-26 17:43:23.770091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75394 ] 00:46:23.457 [2024-11-26 17:43:23.956465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:46:23.457 [2024-11-26 17:43:24.105828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:23.457 [2024-11-26 17:43:24.105856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:24.835 17:43:25 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:24.835 17:43:25 ublk -- common/autotest_common.sh@868 -- # return 0 00:46:24.836 17:43:25 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:46:24.836 17:43:25 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:24.836 17:43:25 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:24.836 17:43:25 ublk -- common/autotest_common.sh@10 -- # set +x 00:46:24.836 ************************************ 00:46:24.836 START TEST test_create_ublk 00:46:24.836 ************************************ 00:46:24.836 17:43:25 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:46:24.836 17:43:25 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:46:24.836 17:43:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:24.836 17:43:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:24.836 [2024-11-26 17:43:25.181521] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:46:24.836 [2024-11-26 17:43:25.184757] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:46:24.836 17:43:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:24.836 17:43:25 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:46:24.836 17:43:25 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:46:24.836 17:43:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:24.836 17:43:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:25.095 17:43:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:25.096 17:43:25 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:46:25.096 17:43:25 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:46:25.096 17:43:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:25.096 17:43:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:25.096 [2024-11-26 17:43:25.541699] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:46:25.096 [2024-11-26 17:43:25.542220] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:46:25.096 [2024-11-26 17:43:25.542242] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:46:25.096 [2024-11-26 17:43:25.542252] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:46:25.096 [2024-11-26 17:43:25.550960] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:46:25.096 [2024-11-26 17:43:25.550987] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:46:25.096 
[2024-11-26 17:43:25.557547] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:46:25.096 [2024-11-26 17:43:25.558185] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:46:25.096 [2024-11-26 17:43:25.580557] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:46:25.096 17:43:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:25.096 17:43:25 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:46:25.096 17:43:25 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:46:25.096 17:43:25 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:46:25.096 17:43:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:25.096 17:43:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:25.096 17:43:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:25.096 17:43:25 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:46:25.096 { 00:46:25.096 "ublk_device": "/dev/ublkb0", 00:46:25.096 "id": 0, 00:46:25.096 "queue_depth": 512, 00:46:25.096 "num_queues": 4, 00:46:25.096 "bdev_name": "Malloc0" 00:46:25.096 } 00:46:25.096 ]' 00:46:25.096 17:43:25 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:46:25.096 17:43:25 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:46:25.096 17:43:25 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:46:25.096 17:43:25 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:46:25.096 17:43:25 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:46:25.096 17:43:25 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:46:25.096 17:43:25 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:46:25.096 17:43:25 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:46:25.096 17:43:25 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:46:25.355 17:43:25 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:46:25.355 17:43:25 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:46:25.355 17:43:25 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:46:25.355 17:43:25 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:46:25.355 17:43:25 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:46:25.355 17:43:25 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:46:25.355 17:43:25 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:46:25.355 17:43:25 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:46:25.355 17:43:25 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:46:25.355 17:43:25 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:46:25.355 17:43:25 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:46:25.355 17:43:25 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
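The fio_template assembled above is launched next as a single command line; translated into an equivalent fio job file (a sketch for readability, not part of the test), the parameters read:

    [fio_test]
    filename=/dev/ublkb0
    offset=0
    size=134217728       ; the full 128 MiB malloc disk
    rw=write
    direct=1
    time_based
    runtime=10
    do_verify=1
    verify=pattern
    verify_pattern=0xcc
    verify_state_save=0

Because time_based and runtime=10 let the write phase consume the entire run, fio immediately warns below that the verification read phase will never start.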
00:46:25.355 17:43:25 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:46:25.355 fio: verification read phase will never start because write phase uses all of runtime 00:46:25.355 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:46:25.355 fio-3.35 00:46:25.355 Starting 1 process 00:46:35.367 00:46:35.367 fio_test: (groupid=0, jobs=1): err= 0: pid=75446: Tue Nov 26 17:43:36 2024 00:46:35.367 write: IOPS=13.8k, BW=53.9MiB/s (56.6MB/s)(540MiB/10002msec); 0 zone resets 00:46:35.367 clat (usec): min=45, max=3965, avg=71.57, stdev=113.89 00:46:35.367 lat (usec): min=45, max=3991, avg=72.02, stdev=113.91 00:46:35.367 clat percentiles (usec): 00:46:35.367 | 1.00th=[ 48], 5.00th=[ 58], 10.00th=[ 60], 20.00th=[ 62], 00:46:35.367 | 30.00th=[ 65], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69], 00:46:35.367 | 70.00th=[ 70], 80.00th=[ 71], 90.00th=[ 73], 95.00th=[ 76], 00:46:35.367 | 99.00th=[ 83], 99.50th=[ 86], 99.90th=[ 2474], 99.95th=[ 3064], 00:46:35.367 | 99.99th=[ 3687] 00:46:35.367 bw ( KiB/s): min=54216, max=62744, per=100.00%, avg=55255.58, stdev=1896.17, samples=19 00:46:35.367 iops : min=13554, max=15686, avg=13813.89, stdev=474.04, samples=19 00:46:35.367 lat (usec) : 50=2.80%, 100=96.93%, 250=0.02%, 500=0.01%, 750=0.02% 00:46:35.367 lat (usec) : 1000=0.02% 00:46:35.367 lat (msec) : 2=0.09%, 4=0.13% 00:46:35.367 cpu : usr=2.45%, sys=10.63%, ctx=138122, majf=0, minf=796 00:46:35.367 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:35.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:35.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:35.367 issued rwts: total=0,138121,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:35.367 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:35.367 00:46:35.368 Run status group 0 (all jobs): 00:46:35.368 WRITE: bw=53.9MiB/s (56.6MB/s), 53.9MiB/s-53.9MiB/s (56.6MB/s-56.6MB/s), io=540MiB (566MB), run=10002-10002msec 00:46:35.368 00:46:35.368 Disk stats (read/write): 00:46:35.368 ublkb0: ios=0/136680, merge=0/0, ticks=0/8510, in_queue=8511, util=99.13% 00:46:35.627 17:43:36 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:46:35.627 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:35.627 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:35.627 [2024-11-26 17:43:36.076683] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:46:35.627 [2024-11-26 17:43:36.109946] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:46:35.627 [2024-11-26 17:43:36.110819] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:46:35.627 [2024-11-26 17:43:36.126551] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:46:35.627 [2024-11-26 17:43:36.126848] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:46:35.627 [2024-11-26 17:43:36.126864] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:46:35.627 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:35.627 17:43:36 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
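The NOT rpc_cmd wrapper that closes the line above drives the negative-path check expanded next: with ublkb0 already stopped, a second ublk_stop_disk must fail. Roughly, as a sketch:

    if ./scripts/rpc.py ublk_stop_disk 0; then
        echo 'expected ublk_stop_disk to fail for an already-stopped disk' >&2
        exit 1
    fi
    # the target answers with JSON-RPC error -19 (-ENODEV, "No such device")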
00:46:35.627 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:46:35.627 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:46:35.627 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:46:35.627 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:35.627 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:46:35.627 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:35.627 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:46:35.627 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:35.627 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:35.627 [2024-11-26 17:43:36.142604] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:46:35.627 request: 00:46:35.627 { 00:46:35.627 "ublk_id": 0, 00:46:35.627 "method": "ublk_stop_disk", 00:46:35.627 "req_id": 1 00:46:35.627 } 00:46:35.627 Got JSON-RPC error response 00:46:35.627 response: 00:46:35.627 { 00:46:35.627 "code": -19, 00:46:35.627 "message": "No such device" 00:46:35.627 } 00:46:35.627 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:46:35.627 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:46:35.627 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:35.627 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:35.627 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:35.627 17:43:36 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:46:35.627 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:35.627 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:35.627 [2024-11-26 17:43:36.166612] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:46:35.627 [2024-11-26 17:43:36.174515] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:46:35.627 [2024-11-26 17:43:36.174557] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:46:35.627 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:35.627 17:43:36 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:46:35.627 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:35.627 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:36.566 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:36.566 17:43:36 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:46:36.566 17:43:36 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:46:36.566 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:36.566 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:36.566 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:36.566 17:43:36 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:46:36.566 17:43:36 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:46:36.566 17:43:36 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:46:36.566 17:43:36 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:46:36.566 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:36.566 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:36.566 17:43:36 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:36.566 17:43:36 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:46:36.566 17:43:36 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:46:36.566 ************************************ 00:46:36.566 END TEST test_create_ublk 00:46:36.566 ************************************ 00:46:36.566 17:43:37 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:46:36.566 00:46:36.566 real 0m11.853s 00:46:36.566 user 0m0.633s 00:46:36.566 sys 0m1.199s 00:46:36.566 17:43:37 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:36.566 17:43:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:36.566 17:43:37 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:46:36.566 17:43:37 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:36.566 17:43:37 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:36.566 17:43:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:46:36.566 ************************************ 00:46:36.566 START TEST test_create_multi_ublk 00:46:36.566 ************************************ 00:46:36.566 17:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:46:36.566 17:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:46:36.566 17:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:36.566 17:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:36.566 [2024-11-26 17:43:37.118509] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:46:36.566 [2024-11-26 17:43:37.121102] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:46:36.566 17:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:36.566 17:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:46:36.567 17:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:46:36.567 17:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:46:36.567 17:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:46:36.567 17:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:36.567 17:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:36.827 17:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:36.827 17:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:46:36.827 17:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:46:36.827 17:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:36.827 17:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:36.827 [2024-11-26 17:43:37.414667] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
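For reference, the check_leftover_devices helper that closed test_create_ublk above amounts to two list RPCs that must both come back empty, roughly:

    test "$(./scripts/rpc.py bdev_get_bdevs | jq length)" -eq 0
    test "$(./scripts/rpc.py bdev_lvol_get_lvstores | jq length)" -eq 0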
00:46:36.827 [2024-11-26 17:43:37.415124] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:46:36.827 [2024-11-26 17:43:37.415141] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:46:36.827 [2024-11-26 17:43:37.415155] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:46:36.827 [2024-11-26 17:43:37.430532] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:46:36.827 [2024-11-26 17:43:37.430563] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:46:36.827 [2024-11-26 17:43:37.438518] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:46:36.827 [2024-11-26 17:43:37.439093] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:46:36.827 [2024-11-26 17:43:37.445351] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:46:36.827 17:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:36.827 17:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:46:36.827 17:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:46:36.827 17:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:46:36.827 17:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:36.827 17:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:37.087 17:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:37.087 17:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:46:37.087 17:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:46:37.087 17:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:37.087 17:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:37.087 [2024-11-26 17:43:37.768662] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:46:37.087 [2024-11-26 17:43:37.769092] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:46:37.087 [2024-11-26 17:43:37.769112] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:46:37.087 [2024-11-26 17:43:37.769120] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:46:37.087 [2024-11-26 17:43:37.776550] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:46:37.087 [2024-11-26 17:43:37.776573] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:46:37.347 [2024-11-26 17:43:37.784536] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:46:37.347 [2024-11-26 17:43:37.785096] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:46:37.347 [2024-11-26 17:43:37.793548] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:46:37.347 17:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:37.347 17:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:46:37.347 17:43:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:46:37.347 17:43:37 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:46:37.347 17:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:37.347 17:43:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:37.606 17:43:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:37.606 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:46:37.606 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:46:37.606 17:43:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:37.606 17:43:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:37.606 [2024-11-26 17:43:38.120643] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:46:37.606 [2024-11-26 17:43:38.121089] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:46:37.606 [2024-11-26 17:43:38.121107] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:46:37.606 [2024-11-26 17:43:38.121118] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:46:37.606 [2024-11-26 17:43:38.128551] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:46:37.606 [2024-11-26 17:43:38.128581] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:46:37.606 [2024-11-26 17:43:38.136534] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:46:37.606 [2024-11-26 17:43:38.137101] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:46:37.606 [2024-11-26 17:43:38.144611] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:46:37.606 17:43:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:37.606 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:46:37.606 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:46:37.606 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:46:37.606 17:43:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:37.606 17:43:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:37.866 17:43:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:37.866 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:46:37.866 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:46:37.866 17:43:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:37.866 17:43:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:37.866 [2024-11-26 17:43:38.464677] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:46:37.866 [2024-11-26 17:43:38.465104] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:46:37.866 [2024-11-26 17:43:38.465124] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:46:37.866 [2024-11-26 17:43:38.465132] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:46:37.866 [2024-11-26 
17:43:38.473779] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:46:37.866 [2024-11-26 17:43:38.473804] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:46:37.866 [2024-11-26 17:43:38.480527] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:46:37.866 [2024-11-26 17:43:38.481121] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:46:37.866 [2024-11-26 17:43:38.489567] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:46:37.866 17:43:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:37.866 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:46:37.866 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:46:37.866 17:43:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:37.866 17:43:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:37.866 17:43:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:37.866 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:46:37.866 { 00:46:37.866 "ublk_device": "/dev/ublkb0", 00:46:37.866 "id": 0, 00:46:37.866 "queue_depth": 512, 00:46:37.866 "num_queues": 4, 00:46:37.866 "bdev_name": "Malloc0" 00:46:37.866 }, 00:46:37.866 { 00:46:37.866 "ublk_device": "/dev/ublkb1", 00:46:37.866 "id": 1, 00:46:37.866 "queue_depth": 512, 00:46:37.866 "num_queues": 4, 00:46:37.866 "bdev_name": "Malloc1" 00:46:37.866 }, 00:46:37.866 { 00:46:37.866 "ublk_device": "/dev/ublkb2", 00:46:37.866 "id": 2, 00:46:37.866 "queue_depth": 512, 00:46:37.866 "num_queues": 4, 00:46:37.866 "bdev_name": "Malloc2" 00:46:37.866 }, 00:46:37.866 { 00:46:37.866 "ublk_device": "/dev/ublkb3", 00:46:37.866 "id": 3, 00:46:37.866 "queue_depth": 512, 00:46:37.866 "num_queues": 4, 00:46:37.866 "bdev_name": "Malloc3" 00:46:37.866 } 00:46:37.866 ]' 00:46:37.866 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:46:37.866 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:46:37.866 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:46:38.127 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:46:38.127 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:46:38.127 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:46:38.127 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:46:38.127 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:46:38.127 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:46:38.127 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:46:38.127 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:46:38.127 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:46:38.127 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:46:38.127 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:46:38.127 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
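At this point test_create_multi_ublk has pushed Malloc0 through Malloc3, one device at a time, through the same ADD_DEV/SET_PARAMS/START_DEV sequence; the per-device entries above condense to a loop along these lines:

    for i in 0 1 2 3; do
        ./scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
        ./scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
    done
    ./scripts/rpc.py ublk_get_disks | jq length    # expect 4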
00:46:38.127 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:46:38.127 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:46:38.127 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:46:38.387 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:46:38.387 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:46:38.387 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:46:38.387 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:46:38.387 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:46:38.387 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:46:38.387 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:46:38.387 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:46:38.387 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:46:38.387 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:46:38.387 17:43:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:46:38.387 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:46:38.387 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:46:38.387 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:46:38.387 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:46:38.646 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:46:38.646 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:46:38.646 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:46:38.646 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:46:38.646 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:46:38.646 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:46:38.646 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:46:38.646 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:46:38.646 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:46:38.646 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:46:38.646 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:46:38.646 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:46:38.646 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:46:38.646 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:46:38.646 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:46:38.646 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:46:38.646 17:43:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:38.646 17:43:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:38.646 [2024-11-26 17:43:39.336625] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:46:38.905 [2024-11-26 17:43:39.381568] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:46:38.905 [2024-11-26 17:43:39.382429] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:46:38.905 [2024-11-26 17:43:39.388524] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:46:38.905 [2024-11-26 17:43:39.388814] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:46:38.905 [2024-11-26 17:43:39.388830] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:46:38.905 17:43:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:38.905 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:46:38.905 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:46:38.905 17:43:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:38.905 17:43:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:38.905 [2024-11-26 17:43:39.396608] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:46:38.905 [2024-11-26 17:43:39.437558] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:46:38.905 [2024-11-26 17:43:39.438380] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:46:38.905 [2024-11-26 17:43:39.444526] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:46:38.905 [2024-11-26 17:43:39.444810] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:46:38.905 [2024-11-26 17:43:39.444824] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:46:38.905 17:43:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:38.905 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:46:38.905 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:46:38.905 17:43:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:38.905 17:43:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:38.905 [2024-11-26 17:43:39.449663] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:46:38.905 [2024-11-26 17:43:39.484953] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:46:38.905 [2024-11-26 17:43:39.485936] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:46:38.905 [2024-11-26 17:43:39.496538] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:46:38.905 [2024-11-26 17:43:39.496801] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:46:38.905 [2024-11-26 17:43:39.496814] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:46:38.905 17:43:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:38.905 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:46:38.905 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:46:38.905 17:43:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:38.905 17:43:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
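Teardown runs in the reverse order of setup, as the STOP_DEV/DEL_DEV entries above and below show: every disk is stopped before the target itself is destroyed and the backing bdevs deleted. Condensed:

    for i in 0 1 2 3; do
        ./scripts/rpc.py ublk_stop_disk "$i"
    done
    ./scripts/rpc.py -t 120 ublk_destroy_target    # allow a long shutdown timeout
    for i in 0 1 2 3; do
        ./scripts/rpc.py bdev_malloc_delete "Malloc$i"
    done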
00:46:38.905 [2024-11-26 17:43:39.512613] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:46:38.905 [2024-11-26 17:43:39.545943] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:46:38.905 [2024-11-26 17:43:39.546816] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:46:38.905 [2024-11-26 17:43:39.555543] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:46:38.905 [2024-11-26 17:43:39.555795] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:46:38.905 [2024-11-26 17:43:39.555807] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:46:38.905 17:43:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:38.905 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:46:39.164 [2024-11-26 17:43:39.770591] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:46:39.164 [2024-11-26 17:43:39.779513] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:46:39.164 [2024-11-26 17:43:39.779550] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:46:39.164 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:46:39.164 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:46:39.164 17:43:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:46:39.164 17:43:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:39.164 17:43:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:40.103 17:43:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:40.103 17:43:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:46:40.103 17:43:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:46:40.103 17:43:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:40.103 17:43:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:40.363 17:43:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:40.363 17:43:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:46:40.363 17:43:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:46:40.363 17:43:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:40.363 17:43:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:40.622 17:43:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:40.622 17:43:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:46:40.622 17:43:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:46:40.622 17:43:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:40.622 17:43:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:41.192 17:43:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:41.192 17:43:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:46:41.192 17:43:41 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:46:41.192 17:43:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:41.192 17:43:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:41.192 17:43:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:41.192 17:43:41 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:46:41.192 17:43:41 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:46:41.192 17:43:41 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:46:41.192 17:43:41 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:46:41.192 17:43:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:41.192 17:43:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:41.192 17:43:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:41.192 17:43:41 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:46:41.192 17:43:41 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:46:41.192 ************************************ 00:46:41.192 END TEST test_create_multi_ublk 00:46:41.192 ************************************ 00:46:41.192 17:43:41 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:46:41.192 00:46:41.192 real 0m4.668s 00:46:41.192 user 0m0.957s 00:46:41.192 sys 0m0.250s 00:46:41.192 17:43:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:41.192 17:43:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:46:41.192 17:43:41 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:46:41.192 17:43:41 ublk -- ublk/ublk.sh@147 -- # cleanup 00:46:41.192 17:43:41 ublk -- ublk/ublk.sh@130 -- # killprocess 75394 00:46:41.192 17:43:41 ublk -- common/autotest_common.sh@954 -- # '[' -z 75394 ']' 00:46:41.192 17:43:41 ublk -- common/autotest_common.sh@958 -- # kill -0 75394 00:46:41.192 17:43:41 ublk -- common/autotest_common.sh@959 -- # uname 00:46:41.192 17:43:41 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:41.192 17:43:41 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75394 00:46:41.192 killing process with pid 75394 00:46:41.192 17:43:41 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:41.192 17:43:41 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:41.192 17:43:41 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75394' 00:46:41.192 17:43:41 ublk -- common/autotest_common.sh@973 -- # kill 75394 00:46:41.192 17:43:41 ublk -- common/autotest_common.sh@978 -- # wait 75394 00:46:42.572 [2024-11-26 17:43:43.016319] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:46:42.572 [2024-11-26 17:43:43.016620] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:46:43.969 00:46:43.969 real 0m32.410s 00:46:43.969 user 0m45.161s 00:46:43.969 sys 0m11.270s 00:46:43.969 17:43:44 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:43.969 17:43:44 ublk -- common/autotest_common.sh@10 -- # set +x 00:46:43.969 ************************************ 00:46:43.969 END TEST ublk 00:46:43.969 ************************************ 00:46:43.969 17:43:44 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:46:43.969 17:43:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:46:43.969 17:43:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:43.969 17:43:44 -- common/autotest_common.sh@10 -- # set +x 00:46:43.969 ************************************ 00:46:43.969 START TEST ublk_recovery 00:46:43.969 ************************************ 00:46:43.969 17:43:44 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:46:43.969 * Looking for test storage... 00:46:43.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:46:43.969 17:43:44 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:46:43.969 17:43:44 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:46:43.969 17:43:44 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:46:43.969 17:43:44 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:43.969 17:43:44 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:46:43.969 17:43:44 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:43.969 17:43:44 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:46:43.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:43.969 --rc genhtml_branch_coverage=1 00:46:43.969 --rc genhtml_function_coverage=1 00:46:43.969 --rc genhtml_legend=1 00:46:43.969 --rc geninfo_all_blocks=1 00:46:43.969 --rc geninfo_unexecuted_blocks=1 00:46:43.969 00:46:43.969 ' 00:46:43.969 17:43:44 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:46:43.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:43.969 --rc genhtml_branch_coverage=1 00:46:43.969 --rc genhtml_function_coverage=1 00:46:43.969 --rc genhtml_legend=1 00:46:43.969 --rc geninfo_all_blocks=1 00:46:43.969 --rc geninfo_unexecuted_blocks=1 00:46:43.969 00:46:43.969 ' 00:46:43.969 17:43:44 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:46:43.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:43.969 --rc genhtml_branch_coverage=1 00:46:43.969 --rc genhtml_function_coverage=1 00:46:43.969 --rc genhtml_legend=1 00:46:43.969 --rc geninfo_all_blocks=1 00:46:43.969 --rc geninfo_unexecuted_blocks=1 00:46:43.969 00:46:43.969 ' 00:46:43.969 17:43:44 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:46:43.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:43.969 --rc genhtml_branch_coverage=1 00:46:43.969 --rc genhtml_function_coverage=1 00:46:43.969 --rc genhtml_legend=1 00:46:43.969 --rc geninfo_all_blocks=1 00:46:43.969 --rc geninfo_unexecuted_blocks=1 00:46:43.969 00:46:43.969 ' 00:46:43.969 17:43:44 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:46:43.969 17:43:44 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:46:43.969 17:43:44 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:46:43.969 17:43:44 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:46:43.969 17:43:44 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:46:43.969 17:43:44 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:46:43.969 17:43:44 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:46:43.969 17:43:44 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:46:43.969 17:43:44 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:46:43.969 17:43:44 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:46:44.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:44.263 17:43:44 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75822 00:46:44.263 17:43:44 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:46:44.263 17:43:44 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:44.263 17:43:44 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75822 00:46:44.263 17:43:44 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75822 ']' 00:46:44.263 17:43:44 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:44.263 17:43:44 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:44.263 17:43:44 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:44.263 17:43:44 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:44.263 17:43:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:46:44.263 [2024-11-26 17:43:44.767840] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:46:44.263 [2024-11-26 17:43:44.768136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75822 ] 00:46:44.263 [2024-11-26 17:43:44.956567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:46:44.522 [2024-11-26 17:43:45.070482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:44.522 [2024-11-26 17:43:45.070552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:45.459 17:43:45 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:45.459 17:43:45 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:46:45.459 17:43:45 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:46:45.459 17:43:45 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:45.459 17:43:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:46:45.459 [2024-11-26 17:43:45.924518] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:46:45.459 [2024-11-26 17:43:45.927221] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:46:45.459 17:43:45 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:45.459 17:43:45 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:46:45.459 17:43:45 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:45.459 17:43:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:46:45.459 malloc0 00:46:45.459 17:43:46 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:45.460 17:43:46 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:46:45.460 17:43:46 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:45.460 17:43:46 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:46:45.460 [2024-11-26 17:43:46.092666] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:46:45.460 [2024-11-26 17:43:46.092795] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:46:45.460 [2024-11-26 17:43:46.092811] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:46:45.460 [2024-11-26 17:43:46.092823] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:46:45.460 [2024-11-26 17:43:46.101619] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:46:45.460 [2024-11-26 17:43:46.101644] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:46:45.460 [2024-11-26 17:43:46.108537] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:46:45.460 [2024-11-26 17:43:46.108682] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:46:45.460 [2024-11-26 17:43:46.131536] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:46:45.460 1 00:46:45.460 17:43:46 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:45.460 17:43:46 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:46:46.839 17:43:47 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75863 00:46:46.839 17:43:47 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:46:46.839 17:43:47 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:46:46.839 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:46:46.839 fio-3.35 00:46:46.839 Starting 1 process 00:46:52.242 17:43:52 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75822 00:46:52.242 17:43:52 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:46:56.474 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75822 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:46:56.474 17:43:57 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:46:56.474 17:43:57 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=75975 00:46:56.474 17:43:57 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:56.474 17:43:57 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 75975 00:46:56.474 17:43:57 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75975 ']' 00:46:56.474 17:43:57 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:56.474 17:43:57 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:56.474 17:43:57 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:56.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:56.474 17:43:57 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:56.474 17:43:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:46:56.733 [2024-11-26 17:43:57.296557] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
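The trace to this point is the first half of the ublk_recovery scenario: spdk_tgt (pid 75822) comes up on cores 0-1 with ublk debug logging, a ublk target is created, a 64 MiB malloc bdev with 4 KiB blocks backs kernel device 1 (2 queues, queue depth 128), a 60-second randrw fio job pinned to cores 2-3 runs against /dev/ublkb1, and the target is then killed with SIGKILL mid-I/O. A minimal sketch of that phase, reconstructed from the rpc_cmd invocations logged above (rpc.py path and SPDK_BIN_DIR are the same ones the trace uses):

modprobe ublk_drv
"$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &              # cores 0-1, ublk debug traces
spdk_pid=$!
scripts/rpc.py ublk_create_target
scripts/rpc.py bdev_malloc_create -b malloc0 64 4096   # 64 MiB backing bdev, 4 KiB blocks
scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128   # exposes /dev/ublkb1
taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
    --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
    --time_based --runtime=60 &
fio_proc=$!
sleep 5
kill -9 "$spdk_pid"                                    # crash the target while fio is live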
00:46:56.733 [2024-11-26 17:43:57.296928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75975 ] 00:46:56.992 [2024-11-26 17:43:57.485474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:46:56.992 [2024-11-26 17:43:57.602399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:56.992 [2024-11-26 17:43:57.602427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:57.928 17:43:58 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:57.928 17:43:58 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:46:57.928 17:43:58 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:46:57.928 17:43:58 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:57.928 17:43:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:46:57.928 [2024-11-26 17:43:58.534520] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:46:57.928 [2024-11-26 17:43:58.537201] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:46:57.928 17:43:58 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:57.928 17:43:58 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:46:57.928 17:43:58 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:57.928 17:43:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:46:58.187 malloc0 00:46:58.187 17:43:58 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:58.187 17:43:58 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:46:58.187 17:43:58 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:58.187 17:43:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:46:58.187 [2024-11-26 17:43:58.694722] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:46:58.187 [2024-11-26 17:43:58.694772] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:46:58.188 [2024-11-26 17:43:58.694784] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:46:58.188 [2024-11-26 17:43:58.702593] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:46:58.188 [2024-11-26 17:43:58.702617] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:46:58.188 1 00:46:58.188 17:43:58 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:58.188 17:43:58 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75863 00:46:59.135 [2024-11-26 17:43:59.702597] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:46:59.135 [2024-11-26 17:43:59.710565] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:46:59.135 [2024-11-26 17:43:59.710587] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:47:00.072 [2024-11-26 17:44:00.709020] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:47:00.072 [2024-11-26 17:44:00.713524] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:47:00.072 [2024-11-26 17:44:00.713545] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:47:01.450 [2024-11-26 17:44:01.711971] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:47:01.450 [2024-11-26 17:44:01.716564] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:47:01.450 [2024-11-26 17:44:01.716582] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:47:01.450 [2024-11-26 17:44:01.716617] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:47:01.450 [2024-11-26 17:44:01.716791] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:47:23.391 [2024-11-26 17:44:22.355537] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:47:23.391 [2024-11-26 17:44:22.359536] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:47:23.391 [2024-11-26 17:44:22.369523] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:47:23.391 [2024-11-26 17:44:22.369548] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:47:50.034 00:47:50.034 fio_test: (groupid=0, jobs=1): err= 0: pid=75866: Tue Nov 26 17:44:47 2024 00:47:50.034 read: IOPS=11.0k, BW=42.8MiB/s (44.9MB/s)(2568MiB/60002msec) 00:47:50.034 slat (usec): min=2, max=371, avg= 8.38, stdev= 2.63 00:47:50.034 clat (usec): min=1509, max=30226k, avg=5877.44, stdev=302798.74 00:47:50.034 lat (usec): min=1522, max=30226k, avg=5885.83, stdev=302798.73 00:47:50.034 clat percentiles (msec): 00:47:50.034 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:47:50.034 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 3], 00:47:50.034 | 70.00th=[ 3], 80.00th=[ 3], 90.00th=[ 4], 95.00th=[ 5], 00:47:50.034 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 11], 00:47:50.034 | 99.99th=[17113] 00:47:50.034 bw ( KiB/s): min= 5200, max=95648, per=100.00%, avg=86608.87, stdev=15304.22, samples=60 00:47:50.034 iops : min= 1300, max=23912, avg=21652.25, stdev=3826.10, samples=60 00:47:50.034 write: IOPS=10.9k, BW=42.8MiB/s (44.8MB/s)(2565MiB/60002msec); 0 zone resets 00:47:50.034 slat (usec): min=2, max=263, avg= 8.64, stdev= 2.52 00:47:50.034 clat (usec): min=1580, max=30227k, avg=5792.00, stdev=293632.35 00:47:50.034 lat (usec): min=1590, max=30227k, avg=5800.64, stdev=293632.34 00:47:50.034 clat percentiles (usec): 00:47:50.034 | 1.00th=[ 2180], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 2606], 00:47:50.034 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2769], 00:47:50.034 | 70.00th=[ 2802], 80.00th=[ 2933], 90.00th=[ 3589], 95.00th=[ 4424], 00:47:50.034 | 99.00th=[ 6128], 99.50th=[ 6915], 99.90th=[ 8979], 99.95th=[10945], 00:47:50.034 | 99.99th=[13698] 00:47:50.034 bw ( KiB/s): min= 5240, max=96560, per=100.00%, avg=86537.22, stdev=15259.65, samples=60 00:47:50.034 iops : min= 1310, max=24140, avg=21634.33, stdev=3814.92, samples=60 00:47:50.034 lat (msec) : 2=0.17%, 4=92.69%, 10=7.08%, 20=0.05%, >=2000=0.01% 00:47:50.034 cpu : usr=6.31%, sys=18.60%, ctx=53392, majf=0, minf=13 00:47:50.034 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:47:50.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:50.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:47:50.034 issued rwts: total=657437,656766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:50.034 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:47:50.034 00:47:50.034 Run status group 0 (all jobs): 00:47:50.034 READ: bw=42.8MiB/s (44.9MB/s), 42.8MiB/s-42.8MiB/s (44.9MB/s-44.9MB/s), io=2568MiB (2693MB), run=60002-60002msec 00:47:50.034 WRITE: bw=42.8MiB/s (44.8MB/s), 42.8MiB/s-42.8MiB/s (44.8MB/s-44.8MB/s), io=2565MiB (2690MB), run=60002-60002msec 00:47:50.034 00:47:50.034 Disk stats (read/write): 00:47:50.034 ublkb1: ios=655231/654677, merge=0/0, ticks=3795573/3660995, in_queue=7456569, util=99.98% 00:47:50.034 17:44:47 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:47:50.034 17:44:47 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:50.034 17:44:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:47:50.034 [2024-11-26 17:44:47.428017] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:47:50.034 [2024-11-26 17:44:47.462676] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:47:50.034 [2024-11-26 17:44:47.462892] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:47:50.034 [2024-11-26 17:44:47.477557] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:47:50.034 [2024-11-26 17:44:47.477754] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:47:50.034 [2024-11-26 17:44:47.477766] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:47:50.034 17:44:47 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:50.034 17:44:47 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:47:50.034 17:44:47 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:50.034 17:44:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:47:50.034 [2024-11-26 17:44:47.493629] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:47:50.034 [2024-11-26 17:44:47.501527] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:47:50.034 [2024-11-26 17:44:47.501567] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:47:50.034 17:44:47 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:50.034 17:44:47 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:47:50.034 17:44:47 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:47:50.034 17:44:47 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 75975 00:47:50.034 17:44:47 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 75975 ']' 00:47:50.034 17:44:47 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 75975 00:47:50.034 17:44:47 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:47:50.034 17:44:47 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:50.034 17:44:47 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75975 00:47:50.034 killing process with pid 75975 00:47:50.034 17:44:47 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:50.034 17:44:47 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:50.034 17:44:47 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75975' 00:47:50.034 17:44:47 ublk_recovery -- common/autotest_common.sh@973 -- # kill 75975 00:47:50.034 17:44:47 ublk_recovery -- common/autotest_common.sh@978 -- # wait 75975 00:47:50.034 [2024-11-26 17:44:49.169395] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:47:50.034 
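The second half above is the actual recovery: a fresh spdk_tgt (pid 75975) re-creates the ublk target and an identical malloc0 bdev, then calls ublk_recover_disk instead of ublk_start_disk. The control-command trace shows the handshake with the kernel driver: UBLK_CMD_GET_DEV_INFO is reissued roughly once per second until the recovery path can proceed (device state 1 in the trace), then START_USER_RECOVERY and END_USER_RECOVERY reattach the queues, after which the still-running fio job finishes its 60 seconds with err=0 and util=99.98%. A sketch of this phase, again taken from the rpc_cmd calls in the trace:

scripts/rpc.py ublk_create_target
scripts/rpc.py bdev_malloc_create -b malloc0 64 4096   # must match the pre-crash bdev
scripts/rpc.py ublk_recover_disk malloc0 1             # reattach the existing /dev/ublkb1
wait "$fio_proc"                                       # fio rides through the daemon restart
scripts/rpc.py ublk_stop_disk 1
scripts/rpc.py ublk_destroy_target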
[2024-11-26 17:44:49.169485] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:47:50.034 00:47:50.034 real 1m6.212s 00:47:50.034 user 1m50.147s 00:47:50.034 sys 0m26.379s 00:47:50.034 17:44:50 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:50.034 17:44:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:47:50.034 ************************************ 00:47:50.034 END TEST ublk_recovery 00:47:50.034 ************************************ 00:47:50.034 17:44:50 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:47:50.034 17:44:50 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:47:50.034 17:44:50 -- spdk/autotest.sh@260 -- # timing_exit lib 00:47:50.034 17:44:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:50.034 17:44:50 -- common/autotest_common.sh@10 -- # set +x 00:47:50.312 17:44:50 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:47:50.312 17:44:50 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:47:50.312 17:44:50 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:47:50.313 17:44:50 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:47:50.313 17:44:50 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:47:50.313 17:44:50 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:47:50.313 17:44:50 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:47:50.313 17:44:50 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:47:50.313 17:44:50 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:47:50.313 17:44:50 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:47:50.313 17:44:50 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:47:50.313 17:44:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:47:50.313 17:44:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:50.313 17:44:50 -- common/autotest_common.sh@10 -- # set +x 00:47:50.313 ************************************ 00:47:50.313 START TEST ftl 00:47:50.313 ************************************ 00:47:50.313 17:44:50 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:47:50.313 * Looking for test storage... 00:47:50.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:47:50.313 17:44:50 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:47:50.313 17:44:50 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:47:50.313 17:44:50 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:47:50.313 17:44:50 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:47:50.313 17:44:50 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:50.313 17:44:50 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:50.313 17:44:50 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:50.313 17:44:50 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:47:50.313 17:44:50 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:47:50.313 17:44:50 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:47:50.313 17:44:50 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:47:50.313 17:44:50 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:47:50.313 17:44:50 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:47:50.313 17:44:50 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:47:50.313 17:44:50 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:50.313 17:44:50 ftl -- scripts/common.sh@344 -- # case "$op" in 00:47:50.313 17:44:50 ftl -- scripts/common.sh@345 -- # : 1 00:47:50.313 17:44:50 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:50.313 17:44:50 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:50.313 17:44:50 ftl -- scripts/common.sh@365 -- # decimal 1 00:47:50.313 17:44:50 ftl -- scripts/common.sh@353 -- # local d=1 00:47:50.313 17:44:50 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:50.313 17:44:50 ftl -- scripts/common.sh@355 -- # echo 1 00:47:50.313 17:44:50 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:47:50.313 17:44:50 ftl -- scripts/common.sh@366 -- # decimal 2 00:47:50.313 17:44:50 ftl -- scripts/common.sh@353 -- # local d=2 00:47:50.313 17:44:50 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:50.313 17:44:50 ftl -- scripts/common.sh@355 -- # echo 2 00:47:50.313 17:44:50 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:47:50.313 17:44:50 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:50.313 17:44:50 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:50.313 17:44:50 ftl -- scripts/common.sh@368 -- # return 0 00:47:50.313 17:44:50 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:50.313 17:44:50 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:47:50.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:50.313 --rc genhtml_branch_coverage=1 00:47:50.313 --rc genhtml_function_coverage=1 00:47:50.313 --rc genhtml_legend=1 00:47:50.313 --rc geninfo_all_blocks=1 00:47:50.313 --rc geninfo_unexecuted_blocks=1 00:47:50.313 00:47:50.313 ' 00:47:50.313 17:44:50 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:47:50.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:50.313 --rc genhtml_branch_coverage=1 00:47:50.313 --rc genhtml_function_coverage=1 00:47:50.313 --rc genhtml_legend=1 00:47:50.313 --rc geninfo_all_blocks=1 00:47:50.313 --rc geninfo_unexecuted_blocks=1 00:47:50.313 00:47:50.313 ' 00:47:50.313 17:44:50 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:47:50.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:50.313 --rc genhtml_branch_coverage=1 00:47:50.313 --rc genhtml_function_coverage=1 00:47:50.313 --rc genhtml_legend=1 00:47:50.313 --rc geninfo_all_blocks=1 00:47:50.313 --rc geninfo_unexecuted_blocks=1 00:47:50.313 00:47:50.313 ' 00:47:50.313 17:44:50 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:47:50.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:50.313 --rc genhtml_branch_coverage=1 00:47:50.313 --rc genhtml_function_coverage=1 00:47:50.313 --rc genhtml_legend=1 00:47:50.313 --rc geninfo_all_blocks=1 00:47:50.313 --rc geninfo_unexecuted_blocks=1 00:47:50.313 00:47:50.313 ' 00:47:50.313 17:44:50 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:47:50.313 17:44:50 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:47:50.313 17:44:50 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:47:50.313 17:44:50 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:47:50.313 17:44:50 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
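The scripts/common.sh xtrace above (lt 1.15 2 driving cmp_versions) is a pure-bash version comparison: both version strings are split on ".", "-", and ":" via IFS, then walked field by field as integers. A condensed, self-contained sketch of the same idea — the helper name ver_lt and the compact body are mine; the real cmp_versions additionally handles other comparison operators:

ver_lt() {                         # true if $1 sorts before $2, numeric field-wise
  local IFS=.-: i v1 v2
  read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
  for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
  done
  return 1                         # equal
}
ver_lt 1.15 2 && echo "1.15 < 2"   # matches the trace: lcov 1.15 predates 2.x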
00:47:50.313 17:44:50 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:47:50.313 17:44:50 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:47:50.313 17:44:50 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:47:50.313 17:44:50 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:47:50.313 17:44:50 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:47:50.313 17:44:50 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:47:50.313 17:44:50 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:47:50.313 17:44:50 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:47:50.313 17:44:50 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:47:50.313 17:44:50 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:47:50.313 17:44:50 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:47:50.313 17:44:50 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:47:50.313 17:44:50 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:47:50.313 17:44:50 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:47:50.313 17:44:50 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:47:50.313 17:44:50 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:47:50.313 17:44:50 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:47:50.313 17:44:50 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:47:50.313 17:44:50 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:47:50.313 17:44:50 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:47:50.313 17:44:50 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:47:50.314 17:44:50 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:47:50.314 17:44:50 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:50.314 17:44:50 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:50.314 17:44:50 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:47:50.314 17:44:50 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:47:50.314 17:44:50 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:47:50.314 17:44:50 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:47:50.314 17:44:50 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:47:50.314 17:44:50 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:47:50.883 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:47:51.143 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:47:51.143 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:47:51.143 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:47:51.143 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:47:51.143 17:44:51 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:47:51.143 17:44:51 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76781 00:47:51.143 17:44:51 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76781 00:47:51.143 17:44:51 ftl -- common/autotest_common.sh@835 -- # '[' -z 76781 ']' 00:47:51.143 17:44:51 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:51.143 17:44:51 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:51.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:51.143 17:44:51 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:51.143 17:44:51 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:51.143 17:44:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:47:51.402 [2024-11-26 17:44:51.938343] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:47:51.402 [2024-11-26 17:44:51.938710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76781 ] 00:47:51.661 [2024-11-26 17:44:52.125730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:51.661 [2024-11-26 17:44:52.234043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:52.232 17:44:52 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:52.232 17:44:52 ftl -- common/autotest_common.sh@868 -- # return 0 00:47:52.232 17:44:52 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:47:52.491 17:44:52 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:47:53.428 17:44:54 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:47:53.428 17:44:54 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:47:53.997 17:44:54 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:47:53.997 17:44:54 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:47:53.997 17:44:54 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:47:54.256 17:44:54 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:47:54.256 17:44:54 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:47:54.256 17:44:54 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:47:54.256 17:44:54 ftl -- ftl/ftl.sh@50 -- # break 00:47:54.256 17:44:54 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:47:54.256 17:44:54 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:47:54.256 17:44:54 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:47:54.256 17:44:54 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:47:54.516 17:44:54 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:47:54.516 17:44:54 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:47:54.516 17:44:54 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:47:54.516 17:44:54 ftl -- ftl/ftl.sh@63 -- # break 00:47:54.516 17:44:54 ftl -- ftl/ftl.sh@66 -- # killprocess 76781 00:47:54.516 17:44:54 ftl -- common/autotest_common.sh@954 -- # '[' -z 76781 ']' 00:47:54.516 17:44:54 ftl -- common/autotest_common.sh@958 -- # kill -0 76781 00:47:54.516 17:44:54 ftl -- common/autotest_common.sh@959 -- # uname 00:47:54.516 17:44:54 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:54.516 17:44:54 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76781 00:47:54.516 killing process with pid 76781 00:47:54.516 17:44:55 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:54.516 17:44:55 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:54.516 17:44:55 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76781' 00:47:54.516 17:44:55 ftl -- common/autotest_common.sh@973 -- # kill 76781 00:47:54.516 17:44:55 ftl -- common/autotest_common.sh@978 -- # wait 76781 00:47:57.054 17:44:57 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:47:57.054 17:44:57 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:47:57.054 17:44:57 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:47:57.054 17:44:57 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:57.054 17:44:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:47:57.054 ************************************ 00:47:57.054 START TEST ftl_fio_basic 00:47:57.054 ************************************ 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:47:57.054 * Looking for test storage... 00:47:57.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:47:57.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:57.054 --rc genhtml_branch_coverage=1 00:47:57.054 --rc genhtml_function_coverage=1 00:47:57.054 --rc genhtml_legend=1 00:47:57.054 --rc geninfo_all_blocks=1 00:47:57.054 --rc geninfo_unexecuted_blocks=1 00:47:57.054 00:47:57.054 ' 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:47:57.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:57.054 --rc genhtml_branch_coverage=1 00:47:57.054 --rc genhtml_function_coverage=1 00:47:57.054 --rc genhtml_legend=1 00:47:57.054 --rc geninfo_all_blocks=1 00:47:57.054 --rc geninfo_unexecuted_blocks=1 00:47:57.054 00:47:57.054 ' 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:47:57.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:57.054 --rc genhtml_branch_coverage=1 00:47:57.054 --rc genhtml_function_coverage=1 00:47:57.054 --rc genhtml_legend=1 00:47:57.054 --rc geninfo_all_blocks=1 00:47:57.054 --rc geninfo_unexecuted_blocks=1 00:47:57.054 00:47:57.054 ' 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:47:57.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:57.054 --rc genhtml_branch_coverage=1 00:47:57.054 --rc genhtml_function_coverage=1 00:47:57.054 --rc genhtml_legend=1 00:47:57.054 --rc geninfo_all_blocks=1 00:47:57.054 --rc geninfo_unexecuted_blocks=1 00:47:57.054 00:47:57.054 ' 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=76924 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 76924 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 76924 ']' 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:57.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:57.054 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:57.055 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:57.055 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:57.055 17:44:57 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:47:57.055 [2024-11-26 17:44:57.702163] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
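fio.sh's prologue above resolves its three positional arguments into a concrete run: "basic" selects the suite 'randw-verify randw-verify-j2 randw-verify-depth128', 0000:00:11.0 becomes the base device and 0000:00:10.0 the NV cache, FTL_BDEV_NAME/FTL_JSON_CONF are exported for the fio jobs, and spdk_tgt (pid 76924) is started on three cores (-m 7) with a 240 s RPC timeout reserved for the later ftl_create. A sketch of that argument handling, reconstructed from the xtrace (the positional-parameter names are inferred, not shown verbatim in the trace):

declare -A suite
suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
device=$1                                            # 0000:00:11.0, base bdev
cache_device=$2                                      # 0000:00:10.0, NV cache
tests=${suite[$3]}                                   # 'basic' -> the three jobs above
timeout=240
export FTL_BDEV_NAME=ftl0
export FTL_JSON_CONF=$testdir/config/ftl.json
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 &   # cores 0-2
svcpid=$!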
00:47:57.055 [2024-11-26 17:44:57.702303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76924 ] 00:47:57.314 [2024-11-26 17:44:57.891046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:47:57.314 [2024-11-26 17:44:58.004252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:57.314 [2024-11-26 17:44:58.004353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:57.314 [2024-11-26 17:44:58.004382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:47:58.250 17:44:58 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:58.250 17:44:58 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:47:58.250 17:44:58 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:47:58.250 17:44:58 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:47:58.250 17:44:58 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:47:58.250 17:44:58 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:47:58.250 17:44:58 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:47:58.250 17:44:58 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:47:58.509 17:44:59 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:47:58.509 17:44:59 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:47:58.509 17:44:59 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:47:58.509 17:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:47:58.509 17:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:47:58.509 17:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:47:58.509 17:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:47:58.510 17:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:47:58.768 17:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:47:58.768 { 00:47:58.768 "name": "nvme0n1", 00:47:58.768 "aliases": [ 00:47:58.768 "c18dc433-d180-4b45-bbf1-a9aca598de0f" 00:47:58.768 ], 00:47:58.768 "product_name": "NVMe disk", 00:47:58.768 "block_size": 4096, 00:47:58.768 "num_blocks": 1310720, 00:47:58.768 "uuid": "c18dc433-d180-4b45-bbf1-a9aca598de0f", 00:47:58.768 "numa_id": -1, 00:47:58.768 "assigned_rate_limits": { 00:47:58.768 "rw_ios_per_sec": 0, 00:47:58.768 "rw_mbytes_per_sec": 0, 00:47:58.768 "r_mbytes_per_sec": 0, 00:47:58.768 "w_mbytes_per_sec": 0 00:47:58.768 }, 00:47:58.768 "claimed": false, 00:47:58.768 "zoned": false, 00:47:58.768 "supported_io_types": { 00:47:58.768 "read": true, 00:47:58.768 "write": true, 00:47:58.768 "unmap": true, 00:47:58.768 "flush": true, 00:47:58.768 "reset": true, 00:47:58.768 "nvme_admin": true, 00:47:58.768 "nvme_io": true, 00:47:58.768 "nvme_io_md": false, 00:47:58.768 "write_zeroes": true, 00:47:58.768 "zcopy": false, 00:47:58.768 "get_zone_info": false, 00:47:58.768 "zone_management": false, 00:47:58.768 "zone_append": false, 00:47:58.768 "compare": true, 00:47:58.768 "compare_and_write": false, 00:47:58.768 "abort": true, 00:47:58.768 
"seek_hole": false, 00:47:58.768 "seek_data": false, 00:47:58.768 "copy": true, 00:47:58.768 "nvme_iov_md": false 00:47:58.768 }, 00:47:58.768 "driver_specific": { 00:47:58.768 "nvme": [ 00:47:58.768 { 00:47:58.768 "pci_address": "0000:00:11.0", 00:47:58.768 "trid": { 00:47:58.768 "trtype": "PCIe", 00:47:58.768 "traddr": "0000:00:11.0" 00:47:58.768 }, 00:47:58.768 "ctrlr_data": { 00:47:58.768 "cntlid": 0, 00:47:58.768 "vendor_id": "0x1b36", 00:47:58.768 "model_number": "QEMU NVMe Ctrl", 00:47:58.768 "serial_number": "12341", 00:47:58.768 "firmware_revision": "8.0.0", 00:47:58.768 "subnqn": "nqn.2019-08.org.qemu:12341", 00:47:58.768 "oacs": { 00:47:58.768 "security": 0, 00:47:58.768 "format": 1, 00:47:58.768 "firmware": 0, 00:47:58.768 "ns_manage": 1 00:47:58.768 }, 00:47:58.768 "multi_ctrlr": false, 00:47:58.768 "ana_reporting": false 00:47:58.768 }, 00:47:58.768 "vs": { 00:47:58.768 "nvme_version": "1.4" 00:47:58.768 }, 00:47:58.768 "ns_data": { 00:47:58.768 "id": 1, 00:47:58.768 "can_share": false 00:47:58.768 } 00:47:58.768 } 00:47:58.768 ], 00:47:58.768 "mp_policy": "active_passive" 00:47:58.768 } 00:47:58.768 } 00:47:58.768 ]' 00:47:58.768 17:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:47:58.768 17:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:47:58.768 17:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:47:58.768 17:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:47:58.768 17:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:47:58.768 17:44:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:47:58.768 17:44:59 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:47:58.768 17:44:59 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:47:58.768 17:44:59 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:47:58.768 17:44:59 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:47:58.768 17:44:59 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:47:59.027 17:44:59 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:47:59.027 17:44:59 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:47:59.287 17:44:59 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=c890adde-e079-46e7-8cb2-235dac540d51 00:47:59.287 17:44:59 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c890adde-e079-46e7-8cb2-235dac540d51 00:47:59.546 17:45:00 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=74c8cc0d-c2a8-4423-86d9-e430ab6ef283 00:47:59.546 17:45:00 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 74c8cc0d-c2a8-4423-86d9-e430ab6ef283 00:47:59.546 17:45:00 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:47:59.546 17:45:00 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:47:59.546 17:45:00 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=74c8cc0d-c2a8-4423-86d9-e430ab6ef283 00:47:59.546 17:45:00 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:47:59.546 17:45:00 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 74c8cc0d-c2a8-4423-86d9-e430ab6ef283 00:47:59.546 17:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=74c8cc0d-c2a8-4423-86d9-e430ab6ef283 
00:47:59.546 17:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:47:59.546 17:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:47:59.546 17:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:47:59.546 17:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 74c8cc0d-c2a8-4423-86d9-e430ab6ef283 00:47:59.806 17:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:47:59.806 { 00:47:59.806 "name": "74c8cc0d-c2a8-4423-86d9-e430ab6ef283", 00:47:59.806 "aliases": [ 00:47:59.806 "lvs/nvme0n1p0" 00:47:59.806 ], 00:47:59.806 "product_name": "Logical Volume", 00:47:59.806 "block_size": 4096, 00:47:59.806 "num_blocks": 26476544, 00:47:59.806 "uuid": "74c8cc0d-c2a8-4423-86d9-e430ab6ef283", 00:47:59.806 "assigned_rate_limits": { 00:47:59.806 "rw_ios_per_sec": 0, 00:47:59.806 "rw_mbytes_per_sec": 0, 00:47:59.806 "r_mbytes_per_sec": 0, 00:47:59.806 "w_mbytes_per_sec": 0 00:47:59.806 }, 00:47:59.806 "claimed": false, 00:47:59.806 "zoned": false, 00:47:59.806 "supported_io_types": { 00:47:59.806 "read": true, 00:47:59.806 "write": true, 00:47:59.806 "unmap": true, 00:47:59.806 "flush": false, 00:47:59.806 "reset": true, 00:47:59.806 "nvme_admin": false, 00:47:59.806 "nvme_io": false, 00:47:59.806 "nvme_io_md": false, 00:47:59.806 "write_zeroes": true, 00:47:59.806 "zcopy": false, 00:47:59.806 "get_zone_info": false, 00:47:59.806 "zone_management": false, 00:47:59.806 "zone_append": false, 00:47:59.806 "compare": false, 00:47:59.806 "compare_and_write": false, 00:47:59.806 "abort": false, 00:47:59.806 "seek_hole": true, 00:47:59.806 "seek_data": true, 00:47:59.806 "copy": false, 00:47:59.806 "nvme_iov_md": false 00:47:59.806 }, 00:47:59.806 "driver_specific": { 00:47:59.806 "lvol": { 00:47:59.806 "lvol_store_uuid": "c890adde-e079-46e7-8cb2-235dac540d51", 00:47:59.806 "base_bdev": "nvme0n1", 00:47:59.806 "thin_provision": true, 00:47:59.806 "num_allocated_clusters": 0, 00:47:59.806 "snapshot": false, 00:47:59.806 "clone": false, 00:47:59.806 "esnap_clone": false 00:47:59.806 } 00:47:59.806 } 00:47:59.806 } 00:47:59.806 ]' 00:47:59.806 17:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:47:59.806 17:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:47:59.806 17:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:47:59.806 17:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:47:59.806 17:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:47:59.806 17:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:47:59.806 17:45:00 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:47:59.806 17:45:00 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:47:59.806 17:45:00 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:48:00.066 17:45:00 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:48:00.066 17:45:00 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:48:00.066 17:45:00 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 74c8cc0d-c2a8-4423-86d9-e430ab6ef283 00:48:00.066 17:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=74c8cc0d-c2a8-4423-86d9-e430ab6ef283 00:48:00.066 17:45:00 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:48:00.066 17:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:48:00.066 17:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:48:00.066 17:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 74c8cc0d-c2a8-4423-86d9-e430ab6ef283 00:48:00.326 17:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:48:00.326 { 00:48:00.326 "name": "74c8cc0d-c2a8-4423-86d9-e430ab6ef283", 00:48:00.326 "aliases": [ 00:48:00.326 "lvs/nvme0n1p0" 00:48:00.326 ], 00:48:00.326 "product_name": "Logical Volume", 00:48:00.326 "block_size": 4096, 00:48:00.326 "num_blocks": 26476544, 00:48:00.326 "uuid": "74c8cc0d-c2a8-4423-86d9-e430ab6ef283", 00:48:00.326 "assigned_rate_limits": { 00:48:00.326 "rw_ios_per_sec": 0, 00:48:00.326 "rw_mbytes_per_sec": 0, 00:48:00.326 "r_mbytes_per_sec": 0, 00:48:00.326 "w_mbytes_per_sec": 0 00:48:00.326 }, 00:48:00.326 "claimed": false, 00:48:00.326 "zoned": false, 00:48:00.326 "supported_io_types": { 00:48:00.326 "read": true, 00:48:00.326 "write": true, 00:48:00.326 "unmap": true, 00:48:00.326 "flush": false, 00:48:00.326 "reset": true, 00:48:00.326 "nvme_admin": false, 00:48:00.326 "nvme_io": false, 00:48:00.326 "nvme_io_md": false, 00:48:00.326 "write_zeroes": true, 00:48:00.326 "zcopy": false, 00:48:00.326 "get_zone_info": false, 00:48:00.326 "zone_management": false, 00:48:00.326 "zone_append": false, 00:48:00.326 "compare": false, 00:48:00.326 "compare_and_write": false, 00:48:00.326 "abort": false, 00:48:00.326 "seek_hole": true, 00:48:00.326 "seek_data": true, 00:48:00.326 "copy": false, 00:48:00.326 "nvme_iov_md": false 00:48:00.326 }, 00:48:00.326 "driver_specific": { 00:48:00.326 "lvol": { 00:48:00.326 "lvol_store_uuid": "c890adde-e079-46e7-8cb2-235dac540d51", 00:48:00.326 "base_bdev": "nvme0n1", 00:48:00.326 "thin_provision": true, 00:48:00.326 "num_allocated_clusters": 0, 00:48:00.326 "snapshot": false, 00:48:00.326 "clone": false, 00:48:00.326 "esnap_clone": false 00:48:00.326 } 00:48:00.326 } 00:48:00.326 } 00:48:00.326 ]' 00:48:00.326 17:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:48:00.326 17:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:48:00.326 17:45:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:48:00.585 17:45:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:48:00.585 17:45:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:48:00.585 17:45:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:48:00.585 17:45:01 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:48:00.585 17:45:01 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:48:00.585 17:45:01 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:48:00.585 17:45:01 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:48:00.585 17:45:01 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:48:00.585 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:48:00.585 17:45:01 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 74c8cc0d-c2a8-4423-86d9-e430ab6ef283 00:48:00.585 17:45:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=74c8cc0d-c2a8-4423-86d9-e430ab6ef283 00:48:00.585 17:45:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:48:00.585 17:45:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:48:00.585 17:45:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:48:00.585 17:45:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 74c8cc0d-c2a8-4423-86d9-e430ab6ef283 00:48:00.844 17:45:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:48:00.844 { 00:48:00.844 "name": "74c8cc0d-c2a8-4423-86d9-e430ab6ef283", 00:48:00.844 "aliases": [ 00:48:00.844 "lvs/nvme0n1p0" 00:48:00.844 ], 00:48:00.844 "product_name": "Logical Volume", 00:48:00.844 "block_size": 4096, 00:48:00.844 "num_blocks": 26476544, 00:48:00.844 "uuid": "74c8cc0d-c2a8-4423-86d9-e430ab6ef283", 00:48:00.844 "assigned_rate_limits": { 00:48:00.844 "rw_ios_per_sec": 0, 00:48:00.844 "rw_mbytes_per_sec": 0, 00:48:00.844 "r_mbytes_per_sec": 0, 00:48:00.844 "w_mbytes_per_sec": 0 00:48:00.844 }, 00:48:00.844 "claimed": false, 00:48:00.844 "zoned": false, 00:48:00.844 "supported_io_types": { 00:48:00.844 "read": true, 00:48:00.844 "write": true, 00:48:00.844 "unmap": true, 00:48:00.844 "flush": false, 00:48:00.844 "reset": true, 00:48:00.844 "nvme_admin": false, 00:48:00.844 "nvme_io": false, 00:48:00.844 "nvme_io_md": false, 00:48:00.844 "write_zeroes": true, 00:48:00.844 "zcopy": false, 00:48:00.844 "get_zone_info": false, 00:48:00.844 "zone_management": false, 00:48:00.844 "zone_append": false, 00:48:00.844 "compare": false, 00:48:00.844 "compare_and_write": false, 00:48:00.844 "abort": false, 00:48:00.844 "seek_hole": true, 00:48:00.844 "seek_data": true, 00:48:00.844 "copy": false, 00:48:00.844 "nvme_iov_md": false 00:48:00.844 }, 00:48:00.844 "driver_specific": { 00:48:00.844 "lvol": { 00:48:00.844 "lvol_store_uuid": "c890adde-e079-46e7-8cb2-235dac540d51", 00:48:00.844 "base_bdev": "nvme0n1", 00:48:00.844 "thin_provision": true, 00:48:00.844 "num_allocated_clusters": 0, 00:48:00.844 "snapshot": false, 00:48:00.844 "clone": false, 00:48:00.844 "esnap_clone": false 00:48:00.844 } 00:48:00.844 } 00:48:00.844 } 00:48:00.844 ]' 00:48:00.844 17:45:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:48:00.844 17:45:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:48:00.844 17:45:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:48:01.105 17:45:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:48:01.105 17:45:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:48:01.105 17:45:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:48:01.105 17:45:01 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:48:01.105 17:45:01 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:48:01.105 17:45:01 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 74c8cc0d-c2a8-4423-86d9-e430ab6ef283 -c nvc0n1p0 --l2p_dram_limit 60 00:48:01.105 [2024-11-26 17:45:01.767798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:01.105 [2024-11-26 17:45:01.767981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:48:01.105 [2024-11-26 17:45:01.768009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:48:01.105 
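Two things are worth noting in the span above. First, the message "/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected" is a benign quoting bug in the test script: the variable tested by the '[ ... -eq 1 ]' at fio.sh line 52 expands to nothing, so test sees a missing operand, returns nonzero, and the script simply takes the false branch and continues; the usual defensive idiom is [ "${flag:-0}" -eq 1 ]. Second, the FTL device stack is now fully assembled, and the ftl0 mngt trace that follows walks its startup steps. A sketch of the whole construction, reconstructed from the rpc.py calls in the trace (the lvstore and lvol UUIDs are the ones this run generated):

scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base: nvme0n1
scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs                           # -> c890adde-...
scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t \
    -u c890adde-e079-46e7-8cb2-235dac540d51                                   # 101 GiB thin lvol
scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # cache: nvc0n1
scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB cache slice
scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
    -d 74c8cc0d-c2a8-4423-86d9-e430ab6ef283 -c nvc0n1p0 --l2p_dram_limit 60   # 60 MiB L2P budget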
[2024-11-26 17:45:01.768021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:01.105 [2024-11-26 17:45:01.768139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:01.105 [2024-11-26 17:45:01.768153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:48:01.105 [2024-11-26 17:45:01.768168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:48:01.105 [2024-11-26 17:45:01.768178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:01.105 [2024-11-26 17:45:01.768239] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:48:01.105 [2024-11-26 17:45:01.769236] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:48:01.105 [2024-11-26 17:45:01.769267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:01.105 [2024-11-26 17:45:01.769278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:48:01.105 [2024-11-26 17:45:01.769292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.041 ms 00:48:01.105 [2024-11-26 17:45:01.769302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:01.105 [2024-11-26 17:45:01.769406] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID fbf79a2b-8515-4dff-9f8c-7340703b943e 00:48:01.105 [2024-11-26 17:45:01.771021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:01.105 [2024-11-26 17:45:01.771183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:48:01.105 [2024-11-26 17:45:01.771211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:48:01.105 [2024-11-26 17:45:01.771224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:01.105 [2024-11-26 17:45:01.778844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:01.105 [2024-11-26 17:45:01.778973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:48:01.105 [2024-11-26 17:45:01.779053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.486 ms 00:48:01.105 [2024-11-26 17:45:01.779102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:01.105 [2024-11-26 17:45:01.779278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:01.105 [2024-11-26 17:45:01.779443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:48:01.105 [2024-11-26 17:45:01.779533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:48:01.105 [2024-11-26 17:45:01.779575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:01.105 [2024-11-26 17:45:01.779699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:01.105 [2024-11-26 17:45:01.779747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:48:01.105 [2024-11-26 17:45:01.779979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:48:01.105 [2024-11-26 17:45:01.780023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:01.105 [2024-11-26 17:45:01.780113] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:48:01.105 [2024-11-26 17:45:01.785458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:01.105 [2024-11-26 
17:45:01.785602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:48:01.105 [2024-11-26 17:45:01.785686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.366 ms 00:48:01.105 [2024-11-26 17:45:01.785722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:01.105 [2024-11-26 17:45:01.785810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:01.105 [2024-11-26 17:45:01.785890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:48:01.105 [2024-11-26 17:45:01.785972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:48:01.105 [2024-11-26 17:45:01.786002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:01.105 [2024-11-26 17:45:01.786090] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:48:01.105 [2024-11-26 17:45:01.786278] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:48:01.105 [2024-11-26 17:45:01.786350] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:48:01.105 [2024-11-26 17:45:01.786401] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:48:01.105 [2024-11-26 17:45:01.786569] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:48:01.105 [2024-11-26 17:45:01.786663] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:48:01.105 [2024-11-26 17:45:01.786719] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:48:01.105 [2024-11-26 17:45:01.786750] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:48:01.105 [2024-11-26 17:45:01.786823] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:48:01.105 [2024-11-26 17:45:01.786859] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:48:01.105 [2024-11-26 17:45:01.786899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:01.105 [2024-11-26 17:45:01.786929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:48:01.105 [2024-11-26 17:45:01.786996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:48:01.105 [2024-11-26 17:45:01.787068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:01.105 [2024-11-26 17:45:01.787217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:01.105 [2024-11-26 17:45:01.787314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:48:01.105 [2024-11-26 17:45:01.787397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:48:01.105 [2024-11-26 17:45:01.787429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:01.105 [2024-11-26 17:45:01.787608] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:48:01.105 [2024-11-26 17:45:01.787698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:48:01.105 [2024-11-26 17:45:01.787772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:48:01.105 [2024-11-26 17:45:01.787804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:01.105 [2024-11-26 17:45:01.787837] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:48:01.105 [2024-11-26 17:45:01.787868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:48:01.105 [2024-11-26 17:45:01.787900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:48:01.105 [2024-11-26 17:45:01.787931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:48:01.105 [2024-11-26 17:45:01.788025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:48:01.105 [2024-11-26 17:45:01.788062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:48:01.105 [2024-11-26 17:45:01.788094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:48:01.105 [2024-11-26 17:45:01.788125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:48:01.105 [2024-11-26 17:45:01.788156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:48:01.105 [2024-11-26 17:45:01.788186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:48:01.105 [2024-11-26 17:45:01.788269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:48:01.105 [2024-11-26 17:45:01.788304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:01.105 [2024-11-26 17:45:01.788342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:48:01.105 [2024-11-26 17:45:01.788373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:48:01.105 [2024-11-26 17:45:01.788405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:01.105 [2024-11-26 17:45:01.788435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:48:01.105 [2024-11-26 17:45:01.788529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:48:01.105 [2024-11-26 17:45:01.788566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:48:01.105 [2024-11-26 17:45:01.788600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:48:01.105 [2024-11-26 17:45:01.788630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:48:01.105 [2024-11-26 17:45:01.788662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:48:01.105 [2024-11-26 17:45:01.788739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:48:01.105 [2024-11-26 17:45:01.788779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:48:01.106 [2024-11-26 17:45:01.788811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:48:01.106 [2024-11-26 17:45:01.788843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:48:01.106 [2024-11-26 17:45:01.788872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:48:01.106 [2024-11-26 17:45:01.788905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:48:01.106 [2024-11-26 17:45:01.789011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:48:01.106 [2024-11-26 17:45:01.789046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:48:01.106 [2024-11-26 17:45:01.789095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:48:01.106 [2024-11-26 17:45:01.789127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:48:01.106 [2024-11-26 17:45:01.789201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:48:01.106 [2024-11-26 17:45:01.789240] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:48:01.106 [2024-11-26 17:45:01.789271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:48:01.106 [2024-11-26 17:45:01.789361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:48:01.106 [2024-11-26 17:45:01.789427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:01.106 [2024-11-26 17:45:01.789460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:48:01.106 [2024-11-26 17:45:01.789488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:48:01.106 [2024-11-26 17:45:01.789538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:01.106 [2024-11-26 17:45:01.789568] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:48:01.106 [2024-11-26 17:45:01.789603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:48:01.106 [2024-11-26 17:45:01.789693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:48:01.106 [2024-11-26 17:45:01.789708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:01.106 [2024-11-26 17:45:01.789718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:48:01.106 [2024-11-26 17:45:01.789734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:48:01.106 [2024-11-26 17:45:01.789743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:48:01.106 [2024-11-26 17:45:01.789757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:48:01.106 [2024-11-26 17:45:01.789768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:48:01.106 [2024-11-26 17:45:01.789780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:48:01.106 [2024-11-26 17:45:01.789795] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:48:01.106 [2024-11-26 17:45:01.789812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:48:01.106 [2024-11-26 17:45:01.789825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:48:01.106 [2024-11-26 17:45:01.789840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:48:01.106 [2024-11-26 17:45:01.789851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:48:01.106 [2024-11-26 17:45:01.789865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:48:01.106 [2024-11-26 17:45:01.789876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:48:01.106 [2024-11-26 17:45:01.789891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:48:01.106 [2024-11-26 17:45:01.789902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:48:01.106 [2024-11-26 17:45:01.789918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:48:01.106 [2024-11-26 17:45:01.789929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:48:01.106 [2024-11-26 17:45:01.789944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:48:01.106 [2024-11-26 17:45:01.789956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:48:01.106 [2024-11-26 17:45:01.789971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:48:01.106 [2024-11-26 17:45:01.789982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:48:01.106 [2024-11-26 17:45:01.789996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:48:01.106 [2024-11-26 17:45:01.790006] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:48:01.106 [2024-11-26 17:45:01.790024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:48:01.106 [2024-11-26 17:45:01.790036] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:48:01.106 [2024-11-26 17:45:01.790050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:48:01.106 [2024-11-26 17:45:01.790061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:48:01.106 [2024-11-26 17:45:01.790075] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:48:01.106 [2024-11-26 17:45:01.790087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:01.106 [2024-11-26 17:45:01.790101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:48:01.106 [2024-11-26 17:45:01.790113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.548 ms 00:48:01.106 [2024-11-26 17:45:01.790126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:01.106 [2024-11-26 17:45:01.790291] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
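(Editor's annotation, not part of the captured console output.) Two details in the trace above are worth flagging. First, "fio.sh: line 52: [: -eq: unary operator expected" is a genuine shell bug in the test script: the traced test '[' -eq 1 ']' shows that a variable expanded to the empty string, leaving the single-bracket test with no left operand. The check falls through harmlessly here, but the usual fix is to quote the expansion with a default, or to switch to [[ ]]. A minimal sketch, with a hypothetical variable name standing in for whatever fio.sh actually tests at line 52:

    # An empty unquoted expansion breaks the single-bracket test:
    #   [ $SOME_FLAG -eq 1 ]     # -> "[: -eq: unary operator expected"
    # Quoting with a default keeps the test well-formed:
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo "flag set"

Second, the FTL startup trace above is driven by the RPC sequence visible earlier in the log, reproduced here as a sketch (the bdev names and UUID are the ones this particular run generated):

    # Carve a 5171 MiB write-buffer cache off the NVMe cache device, then
    # create the FTL bdev on the thin-provisioned LVOL, capping the L2P
    # mapping table at 60 MiB of DRAM, as fio.sh computes above.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d 74c8cc0d-c2a8-4423-86d9-e430ab6ef283 -c nvc0n1p0 --l2p_dram_limit 60

The NV cache scrub announced above rewrites all 5 chunks and dominates startup: the lines that follow report about 4.9 s spent in "Scrub NV cache" out of the roughly 5.4 s total "FTL startup" duration.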
00:48:01.106 [2024-11-26 17:45:01.790312] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:48:06.389 [2024-11-26 17:45:06.672591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:06.389 [2024-11-26 17:45:06.672863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:48:06.389 [2024-11-26 17:45:06.672887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4890.225 ms 00:48:06.389 [2024-11-26 17:45:06.672902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:06.389 [2024-11-26 17:45:06.710748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:06.389 [2024-11-26 17:45:06.710798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:48:06.389 [2024-11-26 17:45:06.710815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.598 ms 00:48:06.389 [2024-11-26 17:45:06.710829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:06.389 [2024-11-26 17:45:06.711006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:06.389 [2024-11-26 17:45:06.711025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:48:06.389 [2024-11-26 17:45:06.711037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:48:06.389 [2024-11-26 17:45:06.711052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:06.389 [2024-11-26 17:45:06.767546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:06.389 [2024-11-26 17:45:06.767598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:48:06.389 [2024-11-26 17:45:06.767614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.502 ms 00:48:06.389 [2024-11-26 17:45:06.767630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:06.389 [2024-11-26 17:45:06.767693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:06.389 [2024-11-26 17:45:06.767707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:48:06.389 [2024-11-26 17:45:06.767719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:48:06.389 [2024-11-26 17:45:06.767732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:06.389 [2024-11-26 17:45:06.768233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:06.389 [2024-11-26 17:45:06.768252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:48:06.389 [2024-11-26 17:45:06.768266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:48:06.389 [2024-11-26 17:45:06.768280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:06.389 [2024-11-26 17:45:06.768421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:06.389 [2024-11-26 17:45:06.768439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:48:06.389 [2024-11-26 17:45:06.768462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:48:06.389 [2024-11-26 17:45:06.768477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:06.389 [2024-11-26 17:45:06.789215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:06.389 [2024-11-26 17:45:06.789260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:48:06.389 [2024-11-26 
17:45:06.789275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.719 ms 00:48:06.389 [2024-11-26 17:45:06.789288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:06.389 [2024-11-26 17:45:06.802223] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:48:06.389 [2024-11-26 17:45:06.818728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:06.389 [2024-11-26 17:45:06.818775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:48:06.389 [2024-11-26 17:45:06.818798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.322 ms 00:48:06.389 [2024-11-26 17:45:06.818809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:06.389 [2024-11-26 17:45:06.917597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:06.389 [2024-11-26 17:45:06.917663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:48:06.389 [2024-11-26 17:45:06.917689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.872 ms 00:48:06.389 [2024-11-26 17:45:06.917700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:06.389 [2024-11-26 17:45:06.917935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:06.389 [2024-11-26 17:45:06.917961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:48:06.389 [2024-11-26 17:45:06.917978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:48:06.390 [2024-11-26 17:45:06.917989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:06.390 [2024-11-26 17:45:06.954220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:06.390 [2024-11-26 17:45:06.954261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:48:06.390 [2024-11-26 17:45:06.954279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.195 ms 00:48:06.390 [2024-11-26 17:45:06.954289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:06.390 [2024-11-26 17:45:06.989234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:06.390 [2024-11-26 17:45:06.989267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:48:06.390 [2024-11-26 17:45:06.989286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.936 ms 00:48:06.390 [2024-11-26 17:45:06.989295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:06.390 [2024-11-26 17:45:06.990087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:06.390 [2024-11-26 17:45:06.990112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:48:06.390 [2024-11-26 17:45:06.990126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:48:06.390 [2024-11-26 17:45:06.990137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:06.649 [2024-11-26 17:45:07.094273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:06.649 [2024-11-26 17:45:07.094310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:48:06.649 [2024-11-26 17:45:07.094333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.224 ms 00:48:06.649 [2024-11-26 17:45:07.094344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:06.649 [2024-11-26 
17:45:07.131095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:06.649 [2024-11-26 17:45:07.131132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:48:06.649 [2024-11-26 17:45:07.131149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.675 ms 00:48:06.649 [2024-11-26 17:45:07.131160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:06.649 [2024-11-26 17:45:07.166603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:06.649 [2024-11-26 17:45:07.166639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:48:06.649 [2024-11-26 17:45:07.166656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.442 ms 00:48:06.649 [2024-11-26 17:45:07.166666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:06.649 [2024-11-26 17:45:07.202116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:06.649 [2024-11-26 17:45:07.202164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:48:06.649 [2024-11-26 17:45:07.202180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.438 ms 00:48:06.649 [2024-11-26 17:45:07.202190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:06.649 [2024-11-26 17:45:07.202258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:06.649 [2024-11-26 17:45:07.202269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:48:06.649 [2024-11-26 17:45:07.202289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:48:06.649 [2024-11-26 17:45:07.202299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:06.649 [2024-11-26 17:45:07.202440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:06.649 [2024-11-26 17:45:07.202453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:48:06.649 [2024-11-26 17:45:07.202466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:48:06.649 [2024-11-26 17:45:07.202476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:06.649 [2024-11-26 17:45:07.203756] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 5444.293 ms, result 0 00:48:06.649 { 00:48:06.649 "name": "ftl0", 00:48:06.649 "uuid": "fbf79a2b-8515-4dff-9f8c-7340703b943e" 00:48:06.649 } 00:48:06.649 17:45:07 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:48:06.649 17:45:07 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:48:06.649 17:45:07 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:48:06.649 17:45:07 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:48:06.649 17:45:07 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:48:06.649 17:45:07 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:48:06.649 17:45:07 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:48:06.909 17:45:07 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:48:07.168 [ 00:48:07.168 { 00:48:07.168 "name": "ftl0", 00:48:07.168 "aliases": [ 00:48:07.168 "fbf79a2b-8515-4dff-9f8c-7340703b943e" 00:48:07.168 ], 00:48:07.168 "product_name": "FTL 
disk", 00:48:07.168 "block_size": 4096, 00:48:07.168 "num_blocks": 20971520, 00:48:07.168 "uuid": "fbf79a2b-8515-4dff-9f8c-7340703b943e", 00:48:07.168 "assigned_rate_limits": { 00:48:07.168 "rw_ios_per_sec": 0, 00:48:07.168 "rw_mbytes_per_sec": 0, 00:48:07.168 "r_mbytes_per_sec": 0, 00:48:07.168 "w_mbytes_per_sec": 0 00:48:07.168 }, 00:48:07.168 "claimed": false, 00:48:07.168 "zoned": false, 00:48:07.168 "supported_io_types": { 00:48:07.168 "read": true, 00:48:07.168 "write": true, 00:48:07.168 "unmap": true, 00:48:07.168 "flush": true, 00:48:07.168 "reset": false, 00:48:07.168 "nvme_admin": false, 00:48:07.168 "nvme_io": false, 00:48:07.168 "nvme_io_md": false, 00:48:07.168 "write_zeroes": true, 00:48:07.168 "zcopy": false, 00:48:07.168 "get_zone_info": false, 00:48:07.168 "zone_management": false, 00:48:07.168 "zone_append": false, 00:48:07.168 "compare": false, 00:48:07.168 "compare_and_write": false, 00:48:07.168 "abort": false, 00:48:07.168 "seek_hole": false, 00:48:07.168 "seek_data": false, 00:48:07.168 "copy": false, 00:48:07.168 "nvme_iov_md": false 00:48:07.168 }, 00:48:07.168 "driver_specific": { 00:48:07.168 "ftl": { 00:48:07.168 "base_bdev": "74c8cc0d-c2a8-4423-86d9-e430ab6ef283", 00:48:07.168 "cache": "nvc0n1p0" 00:48:07.168 } 00:48:07.168 } 00:48:07.168 } 00:48:07.168 ] 00:48:07.168 17:45:07 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:48:07.168 17:45:07 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:48:07.168 17:45:07 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:48:07.427 17:45:07 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:48:07.427 17:45:07 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:48:07.428 [2024-11-26 17:45:08.057134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:07.428 [2024-11-26 17:45:08.057188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:48:07.428 [2024-11-26 17:45:08.057206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:48:07.428 [2024-11-26 17:45:08.057222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.428 [2024-11-26 17:45:08.057281] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:48:07.428 [2024-11-26 17:45:08.061568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:07.428 [2024-11-26 17:45:08.061602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:48:07.428 [2024-11-26 17:45:08.061617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.269 ms 00:48:07.428 [2024-11-26 17:45:08.061628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.428 [2024-11-26 17:45:08.062497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:07.428 [2024-11-26 17:45:08.062532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:48:07.428 [2024-11-26 17:45:08.062547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.805 ms 00:48:07.428 [2024-11-26 17:45:08.062559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.428 [2024-11-26 17:45:08.065114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:07.428 [2024-11-26 17:45:08.065146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:48:07.428 
[2024-11-26 17:45:08.065160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.503 ms 00:48:07.428 [2024-11-26 17:45:08.065170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.428 [2024-11-26 17:45:08.070069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:07.428 [2024-11-26 17:45:08.070101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:48:07.428 [2024-11-26 17:45:08.070116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.850 ms 00:48:07.428 [2024-11-26 17:45:08.070126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.428 [2024-11-26 17:45:08.106272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:07.428 [2024-11-26 17:45:08.106307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:48:07.428 [2024-11-26 17:45:08.106339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.075 ms 00:48:07.428 [2024-11-26 17:45:08.106348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.688 [2024-11-26 17:45:08.128994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:07.688 [2024-11-26 17:45:08.129172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:48:07.688 [2024-11-26 17:45:08.129204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.616 ms 00:48:07.688 [2024-11-26 17:45:08.129216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.688 [2024-11-26 17:45:08.129549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:07.688 [2024-11-26 17:45:08.129567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:48:07.688 [2024-11-26 17:45:08.129582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:48:07.688 [2024-11-26 17:45:08.129593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.688 [2024-11-26 17:45:08.166116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:07.688 [2024-11-26 17:45:08.166166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:48:07.688 [2024-11-26 17:45:08.166184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.533 ms 00:48:07.688 [2024-11-26 17:45:08.166194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.688 [2024-11-26 17:45:08.200889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:07.688 [2024-11-26 17:45:08.201060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:48:07.688 [2024-11-26 17:45:08.201085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.677 ms 00:48:07.688 [2024-11-26 17:45:08.201096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.688 [2024-11-26 17:45:08.236156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:07.688 [2024-11-26 17:45:08.236321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:48:07.688 [2024-11-26 17:45:08.236346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.003 ms 00:48:07.688 [2024-11-26 17:45:08.236357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.688 [2024-11-26 17:45:08.270700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:07.688 [2024-11-26 17:45:08.270743] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:48:07.688 [2024-11-26 17:45:08.270759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.130 ms 00:48:07.688 [2024-11-26 17:45:08.270769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.688 [2024-11-26 17:45:08.270838] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:48:07.688 [2024-11-26 17:45:08.270852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.270868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.270878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.270891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.270903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.270916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.270926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.270942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.270952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.270964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.270975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.270988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.270999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 
[2024-11-26 17:45:08.271116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:48:07.688 [2024-11-26 17:45:08.271422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:48:07.688 [2024-11-26 17:45:08.271445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.271990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.272002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.272013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.272026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.272039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.272052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.272062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.272075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.272102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.272118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:48:07.689 [2024-11-26 17:45:08.272136] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:48:07.689 [2024-11-26 17:45:08.272149] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fbf79a2b-8515-4dff-9f8c-7340703b943e 00:48:07.689 [2024-11-26 17:45:08.272160] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:48:07.689 [2024-11-26 17:45:08.272175] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:48:07.689 [2024-11-26 17:45:08.272188] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:48:07.689 [2024-11-26 17:45:08.272202] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:48:07.689 [2024-11-26 17:45:08.272212] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:48:07.689 [2024-11-26 17:45:08.272225] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:48:07.689 [2024-11-26 17:45:08.272235] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:48:07.689 [2024-11-26 17:45:08.272247] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:48:07.689 [2024-11-26 17:45:08.272256] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:48:07.689 [2024-11-26 17:45:08.272268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:07.689 [2024-11-26 17:45:08.272279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:48:07.689 [2024-11-26 17:45:08.272300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.435 ms 00:48:07.689 [2024-11-26 17:45:08.272310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.689 [2024-11-26 17:45:08.291226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:07.689 [2024-11-26 17:45:08.291258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:48:07.689 [2024-11-26 17:45:08.291273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.862 ms 00:48:07.689 [2024-11-26 17:45:08.291284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.689 [2024-11-26 17:45:08.291847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:07.689 [2024-11-26 17:45:08.291867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:48:07.689 [2024-11-26 17:45:08.291880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:48:07.689 [2024-11-26 17:45:08.291890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.689 [2024-11-26 17:45:08.360323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:07.689 [2024-11-26 17:45:08.360358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:48:07.689 [2024-11-26 17:45:08.360374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:07.689 [2024-11-26 17:45:08.360386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
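(Editor's annotation, not part of the captured console output.) The "Bands validity" dump above lists all 100 bands as free with wr_cnt 0, and the statistics block reports 0 user writes against 960 total device writes, which is why WAF (write amplification factor) prints as "inf": the ratio is undefined until user data has been written. That is the expected state for a freshly created FTL instance being detached before the fio workload runs. The "Set FTL clean state" and persist steps above, together with the Rollback entries around this point that unwind initialization in reverse order, are what a clean shutdown looks like, and it should let the next startup avoid crash recovery of the L2P. The whole sequence is triggered by the single RPC seen earlier in the trace:

    # A clean detach: persist superblock, L2P and band metadata, then
    # roll back the startup steps in reverse before freeing the bdev.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0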
00:48:07.689 [2024-11-26 17:45:08.360467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:07.689 [2024-11-26 17:45:08.360478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:48:07.689 [2024-11-26 17:45:08.360492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:07.689 [2024-11-26 17:45:08.360518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.689 [2024-11-26 17:45:08.360667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:07.689 [2024-11-26 17:45:08.360685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:48:07.689 [2024-11-26 17:45:08.360699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:07.689 [2024-11-26 17:45:08.360708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.689 [2024-11-26 17:45:08.360763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:07.689 [2024-11-26 17:45:08.360773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:48:07.689 [2024-11-26 17:45:08.360786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:07.689 [2024-11-26 17:45:08.360796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.949 [2024-11-26 17:45:08.488856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:07.949 [2024-11-26 17:45:08.488910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:48:07.949 [2024-11-26 17:45:08.488927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:07.949 [2024-11-26 17:45:08.488938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.949 [2024-11-26 17:45:08.586610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:07.949 [2024-11-26 17:45:08.586665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:48:07.949 [2024-11-26 17:45:08.586682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:07.949 [2024-11-26 17:45:08.586692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.949 [2024-11-26 17:45:08.586828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:07.949 [2024-11-26 17:45:08.586841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:48:07.949 [2024-11-26 17:45:08.586860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:07.949 [2024-11-26 17:45:08.586871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.949 [2024-11-26 17:45:08.586982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:07.949 [2024-11-26 17:45:08.586995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:48:07.949 [2024-11-26 17:45:08.587009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:07.949 [2024-11-26 17:45:08.587019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.949 [2024-11-26 17:45:08.587169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:07.949 [2024-11-26 17:45:08.587184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:48:07.949 [2024-11-26 17:45:08.587201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:07.949 [2024-11-26 
17:45:08.587211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.949 [2024-11-26 17:45:08.587291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:07.949 [2024-11-26 17:45:08.587304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:48:07.949 [2024-11-26 17:45:08.587319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:07.949 [2024-11-26 17:45:08.587330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.949 [2024-11-26 17:45:08.587412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:07.949 [2024-11-26 17:45:08.587424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:48:07.949 [2024-11-26 17:45:08.587437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:07.949 [2024-11-26 17:45:08.587451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.949 [2024-11-26 17:45:08.587569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:07.949 [2024-11-26 17:45:08.587583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:48:07.949 [2024-11-26 17:45:08.587597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:07.949 [2024-11-26 17:45:08.587607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:07.949 [2024-11-26 17:45:08.587847] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 531.544 ms, result 0 00:48:07.949 true 00:48:07.949 17:45:08 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 76924 00:48:07.949 17:45:08 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 76924 ']' 00:48:07.949 17:45:08 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 76924 00:48:07.949 17:45:08 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:48:07.949 17:45:08 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:48:07.949 17:45:08 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76924 00:48:08.208 17:45:08 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:48:08.208 killing process with pid 76924 00:48:08.208 17:45:08 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:48:08.208 17:45:08 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76924' 00:48:08.208 17:45:08 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 76924 00:48:08.208 17:45:08 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 76924 00:48:13.497 17:45:13 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:48:13.497 17:45:13 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:48:13.497 17:45:13 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:48:13.497 17:45:13 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:13.497 17:45:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:48:13.497 17:45:13 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:48:13.497 17:45:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:48:13.497 17:45:13 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:48:13.497 17:45:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:13.497 17:45:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:48:13.497 17:45:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:13.497 17:45:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:48:13.497 17:45:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:48:13.497 17:45:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:48:13.497 17:45:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:13.497 17:45:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:48:13.497 17:45:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:48:13.497 17:45:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:48:13.497 17:45:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:48:13.497 17:45:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:48:13.497 17:45:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:48:13.497 17:45:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:48:13.497 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:48:13.497 fio-3.35 00:48:13.497 Starting 1 thread 00:48:18.768 00:48:18.768 test: (groupid=0, jobs=1): err= 0: pid=77153: Tue Nov 26 17:45:19 2024 00:48:18.768 read: IOPS=905, BW=60.1MiB/s (63.0MB/s)(255MiB/4234msec) 00:48:18.768 slat (usec): min=4, max=112, avg= 6.93, stdev= 3.11 00:48:18.768 clat (usec): min=361, max=942, avg=489.92, stdev=50.67 00:48:18.768 lat (usec): min=368, max=948, avg=496.85, stdev=50.97 00:48:18.768 clat percentiles (usec): 00:48:18.768 | 1.00th=[ 379], 5.00th=[ 400], 10.00th=[ 441], 20.00th=[ 449], 00:48:18.768 | 30.00th=[ 453], 40.00th=[ 465], 50.00th=[ 506], 60.00th=[ 515], 00:48:18.769 | 70.00th=[ 519], 80.00th=[ 523], 90.00th=[ 537], 95.00th=[ 570], 00:48:18.769 | 99.00th=[ 619], 99.50th=[ 644], 99.90th=[ 832], 99.95th=[ 930], 00:48:18.769 | 99.99th=[ 947] 00:48:18.769 write: IOPS=911, BW=60.5MiB/s (63.5MB/s)(256MiB/4230msec); 0 zone resets 00:48:18.769 slat (nsec): min=16522, max=90144, avg=25161.70, stdev=6616.94 00:48:18.769 clat (usec): min=383, max=1020, avg=564.24, stdev=63.88 00:48:18.769 lat (usec): min=414, max=1041, avg=589.40, stdev=64.29 00:48:18.769 clat percentiles (usec): 00:48:18.769 | 1.00th=[ 457], 5.00th=[ 469], 10.00th=[ 490], 20.00th=[ 529], 00:48:18.769 | 30.00th=[ 537], 40.00th=[ 545], 50.00th=[ 553], 60.00th=[ 586], 00:48:18.769 | 70.00th=[ 594], 80.00th=[ 603], 90.00th=[ 611], 95.00th=[ 627], 00:48:18.769 | 99.00th=[ 857], 99.50th=[ 906], 99.90th=[ 971], 99.95th=[ 988], 00:48:18.769 | 99.99th=[ 1020] 00:48:18.769 bw ( KiB/s): min=59840, max=63376, per=99.96%, avg=61965.00, stdev=1209.75, samples=8 00:48:18.769 iops : min= 880, max= 932, avg=911.25, stdev=17.79, samples=8 00:48:18.769 lat (usec) : 500=28.63%, 750=70.35%, 1000=1.01% 00:48:18.769 lat (msec) : 
2=0.01% 00:48:18.769 cpu : usr=99.13%, sys=0.00%, ctx=11, majf=0, minf=1169 00:48:18.769 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:18.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:18.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:18.769 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:18.769 latency : target=0, window=0, percentile=100.00%, depth=1 00:48:18.769 00:48:18.769 Run status group 0 (all jobs): 00:48:18.769 READ: bw=60.1MiB/s (63.0MB/s), 60.1MiB/s-60.1MiB/s (63.0MB/s-63.0MB/s), io=255MiB (267MB), run=4234-4234msec 00:48:18.769 WRITE: bw=60.5MiB/s (63.5MB/s), 60.5MiB/s-60.5MiB/s (63.5MB/s-63.5MB/s), io=256MiB (269MB), run=4230-4230msec 00:48:20.674 ----------------------------------------------------- 00:48:20.674 Suppressions used: 00:48:20.674 count bytes template 00:48:20.674 1 5 /usr/src/fio/parse.c 00:48:20.674 1 8 libtcmalloc_minimal.so 00:48:20.674 1 904 libcrypto.so 00:48:20.674 ----------------------------------------------------- 00:48:20.674 00:48:20.674 17:45:21 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:48:20.674 17:45:21 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:20.674 17:45:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:48:20.674 17:45:21 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:48:20.674 17:45:21 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:48:20.674 17:45:21 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:20.674 17:45:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:48:20.674 17:45:21 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:48:20.674 17:45:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:48:20.674 17:45:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:48:20.674 17:45:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:20.674 17:45:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:48:20.674 17:45:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:20.674 17:45:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:48:20.674 17:45:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:48:20.674 17:45:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:48:20.674 17:45:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:20.674 17:45:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:48:20.674 17:45:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:48:20.674 17:45:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:48:20.674 17:45:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:48:20.674 17:45:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:48:20.674 17:45:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:48:20.674 17:45:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:48:20.933 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:48:20.933 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:48:20.933 fio-3.35 00:48:20.933 Starting 2 threads 00:48:53.020 00:48:53.020 first_half: (groupid=0, jobs=1): err= 0: pid=77260: Tue Nov 26 17:45:53 2024 00:48:53.020 read: IOPS=2166, BW=8668KiB/s (8876kB/s)(256MiB/30215msec) 00:48:53.020 slat (nsec): min=3629, max=47891, avg=8737.89, stdev=2697.54 00:48:53.020 clat (usec): min=722, max=358678, avg=49023.02, stdev=33404.64 00:48:53.020 lat (usec): min=726, max=358688, avg=49031.76, stdev=33404.80 00:48:53.020 clat percentiles (msec): 00:48:53.020 | 1.00th=[ 12], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 41], 00:48:53.020 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 41], 60.00th=[ 41], 00:48:53.020 | 70.00th=[ 42], 80.00th=[ 47], 90.00th=[ 50], 95.00th=[ 107], 00:48:53.020 | 99.00th=[ 224], 99.50th=[ 236], 99.90th=[ 275], 99.95th=[ 309], 00:48:53.020 | 99.99th=[ 351] 00:48:53.020 write: IOPS=2172, BW=8689KiB/s (8897kB/s)(256MiB/30171msec); 0 zone resets 00:48:53.020 slat (usec): min=4, max=801, avg= 8.86, stdev= 6.87 00:48:53.020 clat (usec): min=365, max=55790, avg=10000.26, stdev=8985.39 00:48:53.020 lat (usec): min=371, max=55799, avg=10009.12, stdev=8985.44 00:48:53.020 clat percentiles (usec): 00:48:53.020 | 1.00th=[ 1401], 5.00th=[ 1942], 10.00th=[ 2376], 20.00th=[ 3949], 00:48:53.020 | 30.00th=[ 5800], 40.00th=[ 7373], 50.00th=[ 8455], 60.00th=[ 9372], 00:48:53.020 | 70.00th=[10290], 80.00th=[12256], 90.00th=[17433], 95.00th=[25560], 00:48:53.020 | 99.00th=[50070], 99.50th=[51643], 99.90th=[53740], 99.95th=[54264], 00:48:53.020 | 99.99th=[54789] 00:48:53.020 bw ( KiB/s): min= 296, max=44824, per=100.00%, avg=19285.78, stdev=13222.26, samples=27 00:48:53.020 iops : min= 74, max=11206, avg=4821.44, stdev=3305.57, samples=27 00:48:53.020 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.05% 00:48:53.020 lat (msec) : 2=2.74%, 4=7.29%, 10=24.05%, 20=14.46%, 50=46.16% 00:48:53.020 lat (msec) : 100=2.59%, 250=2.51%, 500=0.12% 00:48:53.020 cpu : usr=99.23%, sys=0.16%, ctx=46, majf=0, minf=5540 00:48:53.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:48:53.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:53.020 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:48:53.020 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:53.020 latency : target=0, window=0, percentile=100.00%, depth=128 00:48:53.020 second_half: (groupid=0, jobs=1): err= 0: pid=77261: Tue Nov 26 17:45:53 2024 00:48:53.020 read: IOPS=2185, BW=8741KiB/s (8951kB/s)(256MiB/29968msec) 00:48:53.020 slat (nsec): min=3466, max=45252, avg=8837.52, stdev=2661.66 00:48:53.020 clat (msec): min=13, max=279, avg=49.53, stdev=30.65 00:48:53.020 lat (msec): min=13, max=279, avg=49.54, stdev=30.65 00:48:53.020 clat percentiles (msec): 00:48:53.020 | 1.00th=[ 38], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 41], 00:48:53.020 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 41], 60.00th=[ 42], 00:48:53.020 | 70.00th=[ 43], 80.00th=[ 47], 90.00th=[ 55], 95.00th=[ 97], 00:48:53.020 | 99.00th=[ 220], 
99.50th=[ 230], 99.90th=[ 255], 99.95th=[ 262], 00:48:53.020 | 99.99th=[ 275] 00:48:53.020 write: IOPS=2198, BW=8794KiB/s (9005kB/s)(256MiB/29809msec); 0 zone resets 00:48:53.020 slat (usec): min=4, max=672, avg= 8.88, stdev= 7.33 00:48:53.020 clat (usec): min=492, max=48480, avg=9010.10, stdev=4892.04 00:48:53.020 lat (usec): min=509, max=48489, avg=9018.97, stdev=4892.15 00:48:53.020 clat percentiles (usec): 00:48:53.020 | 1.00th=[ 1680], 5.00th=[ 2769], 10.00th=[ 3589], 20.00th=[ 5211], 00:48:53.020 | 30.00th=[ 6521], 40.00th=[ 7570], 50.00th=[ 8455], 60.00th=[ 9241], 00:48:53.020 | 70.00th=[ 9896], 80.00th=[11600], 90.00th=[15926], 95.00th=[17695], 00:48:53.020 | 99.00th=[23987], 99.50th=[33817], 99.90th=[41681], 99.95th=[45351], 00:48:53.020 | 99.99th=[46924] 00:48:53.020 bw ( KiB/s): min= 1368, max=41856, per=100.00%, avg=22698.04, stdev=13223.33, samples=23 00:48:53.020 iops : min= 342, max=10464, avg=5674.48, stdev=3305.80, samples=23 00:48:53.020 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.05% 00:48:53.020 lat (msec) : 2=0.87%, 4=5.31%, 10=29.10%, 20=13.88%, 50=44.87% 00:48:53.020 lat (msec) : 100=3.53%, 250=2.30%, 500=0.08% 00:48:53.021 cpu : usr=99.27%, sys=0.16%, ctx=42, majf=0, minf=5577 00:48:53.021 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:48:53.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:53.021 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:48:53.021 issued rwts: total=65490,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:53.021 latency : target=0, window=0, percentile=100.00%, depth=128 00:48:53.021 00:48:53.021 Run status group 0 (all jobs): 00:48:53.021 READ: bw=16.9MiB/s (17.8MB/s), 8668KiB/s-8741KiB/s (8876kB/s-8951kB/s), io=512MiB (536MB), run=29968-30215msec 00:48:53.021 WRITE: bw=17.0MiB/s (17.8MB/s), 8689KiB/s-8794KiB/s (8897kB/s-9005kB/s), io=512MiB (537MB), run=29809-30171msec 00:48:55.555 ----------------------------------------------------- 00:48:55.555 Suppressions used: 00:48:55.555 count bytes template 00:48:55.555 2 10 /usr/src/fio/parse.c 00:48:55.555 3 288 /usr/src/fio/iolog.c 00:48:55.555 1 8 libtcmalloc_minimal.so 00:48:55.555 1 904 libcrypto.so 00:48:55.555 ----------------------------------------------------- 00:48:55.555 00:48:55.555 17:45:55 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:48:55.555 17:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:48:55.555 17:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:48:55.555 17:45:55 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:48:55.555 17:45:55 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:48:55.555 17:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:55.555 17:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:48:55.555 17:45:55 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:48:55.555 17:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:48:55.555 17:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:48:55.555 17:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:55.555 17:45:55 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:48:55.555 17:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:55.555 17:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:48:55.555 17:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:48:55.555 17:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:48:55.555 17:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:55.555 17:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:48:55.555 17:45:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:48:55.555 17:45:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:48:55.555 17:45:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:48:55.555 17:45:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:48:55.555 17:45:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:48:55.555 17:45:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:48:55.555 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:48:55.555 fio-3.35 00:48:55.555 Starting 1 thread 00:49:13.673 00:49:13.673 test: (groupid=0, jobs=1): err= 0: pid=77647: Tue Nov 26 17:46:11 2024 00:49:13.673 read: IOPS=7101, BW=27.7MiB/s (29.1MB/s)(255MiB/9182msec) 00:49:13.673 slat (usec): min=3, max=424, avg= 9.14, stdev= 5.24 00:49:13.673 clat (usec): min=806, max=34810, avg=18010.92, stdev=992.41 00:49:13.673 lat (usec): min=810, max=34814, avg=18020.06, stdev=992.28 00:49:13.673 clat percentiles (usec): 00:49:13.673 | 1.00th=[16909], 5.00th=[17171], 10.00th=[17433], 20.00th=[17433], 00:49:13.673 | 30.00th=[17695], 40.00th=[17695], 50.00th=[17957], 60.00th=[17957], 00:49:13.673 | 70.00th=[18220], 80.00th=[18220], 90.00th=[18744], 95.00th=[19006], 00:49:13.673 | 99.00th=[21627], 99.50th=[21890], 99.90th=[28705], 99.95th=[30540], 00:49:13.673 | 99.99th=[33817] 00:49:13.673 write: IOPS=12.9k, BW=50.3MiB/s (52.8MB/s)(256MiB/5086msec); 0 zone resets 00:49:13.673 slat (usec): min=4, max=2923, avg= 7.95, stdev=14.23 00:49:13.673 clat (usec): min=477, max=63758, avg=9882.67, stdev=12165.13 00:49:13.673 lat (usec): min=484, max=63765, avg=9890.62, stdev=12165.28 00:49:13.673 clat percentiles (usec): 00:49:13.673 | 1.00th=[ 947], 5.00th=[ 1156], 10.00th=[ 1287], 20.00th=[ 1500], 00:49:13.673 | 30.00th=[ 1696], 40.00th=[ 2114], 50.00th=[ 6259], 60.00th=[ 7439], 00:49:13.673 | 70.00th=[ 8717], 80.00th=[11076], 90.00th=[35914], 95.00th=[37487], 00:49:13.673 | 99.00th=[41681], 99.50th=[43254], 99.90th=[57410], 99.95th=[60556], 00:49:13.673 | 99.99th=[63177] 00:49:13.673 bw ( KiB/s): min= 5744, max=70384, per=92.47%, avg=47662.55, stdev=16559.87, samples=11 00:49:13.673 iops : min= 1436, max=17596, avg=11915.64, stdev=4139.97, samples=11 00:49:13.673 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.72% 00:49:13.673 lat (msec) : 2=18.79%, 4=1.55%, 10=17.36%, 20=52.35%, 50=9.06% 00:49:13.673 lat (msec) : 100=0.12% 00:49:13.673 cpu : usr=97.82%, sys=0.83%, ctx=59, majf=0, minf=5565 00:49:13.673 IO depths 
: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:49:13.673 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:13.673 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:49:13.673 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:13.673 latency : target=0, window=0, percentile=100.00%, depth=128 00:49:13.673 00:49:13.673 Run status group 0 (all jobs): 00:49:13.673 READ: bw=27.7MiB/s (29.1MB/s), 27.7MiB/s-27.7MiB/s (29.1MB/s-29.1MB/s), io=255MiB (267MB), run=9182-9182msec 00:49:13.673 WRITE: bw=50.3MiB/s (52.8MB/s), 50.3MiB/s-50.3MiB/s (52.8MB/s-52.8MB/s), io=256MiB (268MB), run=5086-5086msec 00:49:13.673 ----------------------------------------------------- 00:49:13.673 Suppressions used: 00:49:13.673 count bytes template 00:49:13.673 1 5 /usr/src/fio/parse.c 00:49:13.673 2 192 /usr/src/fio/iolog.c 00:49:13.673 1 8 libtcmalloc_minimal.so 00:49:13.673 1 904 libcrypto.so 00:49:13.673 ----------------------------------------------------- 00:49:13.673 00:49:13.673 17:46:13 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:49:13.673 17:46:13 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:49:13.673 17:46:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:49:13.673 17:46:14 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:49:13.673 17:46:14 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:49:13.673 17:46:14 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:49:13.673 Remove shared memory files 00:49:13.673 17:46:14 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:49:13.673 17:46:14 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:49:13.673 17:46:14 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57798 /dev/shm/spdk_tgt_trace.pid75822 00:49:13.673 17:46:14 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:49:13.673 17:46:14 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:49:13.673 00:49:13.673 real 1m16.750s 00:49:13.673 user 2m50.795s 00:49:13.673 sys 0m4.066s 00:49:13.673 17:46:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:13.673 17:46:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:49:13.673 ************************************ 00:49:13.673 END TEST ftl_fio_basic 00:49:13.673 ************************************ 00:49:13.673 17:46:14 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:49:13.673 17:46:14 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:49:13.673 17:46:14 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:13.673 17:46:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:49:13.673 ************************************ 00:49:13.673 START TEST ftl_bdevperf 00:49:13.673 ************************************ 00:49:13.673 17:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:49:13.673 * Looking for test storage... 
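One pattern repeats across all three fio runs above and is easy to miss in the xtrace: with SPDK_RUN_ASAN=1 builds, the fio bdev plugin cannot be LD_PRELOADed on its own, because the dynamic loader must see the ASAN runtime the plugin was linked against before the plugin itself. A minimal sketch of what the fio_bdev wrapper does, reconstructed from the xtrace above (fio_config is an illustrative placeholder; only the job file changes between randw-verify, randw-verify-j2, and randw-verify-depth128):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    fio_config=/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio  # job file; swapped per test
    # Resolve the ASAN runtime the plugin links against (3rd ldd column is the path).
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    if [[ -n "$asan_lib" ]]; then
        # Sanitizer runtime must come first in LD_PRELOAD, then the plugin.
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$fio_config"
    else
        LD_PRELOAD="$plugin" /usr/src/fio/fio "$fio_config"
    fi

In this run the loop resolved /usr/lib64/libasan.so.8 and broke out on the first match, which is the LD_PRELOAD value visible before each fio invocation above.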
00:49:13.673 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:49:13.673 17:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:49:13.673 17:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:49:13.673 17:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:49:13.673 17:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:49:13.673 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:13.673 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:13.673 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:13.673 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:49:13.673 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:49:13.673 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:49:13.673 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:49:13.673 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:49:13.673 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:49:13.673 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:49:13.673 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:13.673 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:49:13.673 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:49:13.673 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:13.673 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:49:13.673 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:49:13.673 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:49:13.674 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:13.674 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:49:13.674 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:49:13.674 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:49:13.674 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:49:13.674 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:13.674 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:49:13.674 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:49:13.674 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:13.674 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:13.674 17:46:14 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:49:13.674 17:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:13.674 17:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:49:13.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:13.674 --rc genhtml_branch_coverage=1 00:49:13.674 --rc genhtml_function_coverage=1 00:49:13.674 --rc genhtml_legend=1 00:49:13.674 --rc geninfo_all_blocks=1 00:49:13.674 --rc geninfo_unexecuted_blocks=1 00:49:13.674 00:49:13.674 ' 00:49:13.674 17:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:49:13.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:13.674 --rc genhtml_branch_coverage=1 00:49:13.674 
--rc genhtml_function_coverage=1 00:49:13.674 --rc genhtml_legend=1 00:49:13.674 --rc geninfo_all_blocks=1 00:49:13.674 --rc geninfo_unexecuted_blocks=1 00:49:13.674 00:49:13.674 ' 00:49:13.674 17:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:49:13.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:13.674 --rc genhtml_branch_coverage=1 00:49:13.674 --rc genhtml_function_coverage=1 00:49:13.674 --rc genhtml_legend=1 00:49:13.674 --rc geninfo_all_blocks=1 00:49:13.674 --rc geninfo_unexecuted_blocks=1 00:49:13.674 00:49:13.674 ' 00:49:13.674 17:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:49:13.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:13.674 --rc genhtml_branch_coverage=1 00:49:13.674 --rc genhtml_function_coverage=1 00:49:13.674 --rc genhtml_legend=1 00:49:13.674 --rc geninfo_all_blocks=1 00:49:13.674 --rc geninfo_unexecuted_blocks=1 00:49:13.674 00:49:13.674 ' 00:49:13.674 17:46:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:49:13.674 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=77899 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 77899 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 77899 ']' 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:13.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:13.934 17:46:14 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:49:13.934 [2024-11-26 17:46:14.478629] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
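The bdevperf bring-up logged here follows SPDK's wait-for-RPC pattern: start the application idle, construct the device under test over JSON-RPC, then kick off I/O. A rough sketch, assuming the flag semantics as used in this run (-z holds the app waiting for RPC before starting subsystems; -T names the bdev the perf job will target):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    "$bdevperf" -z -T ftl0 &          # start idle; ftl0 does not exist yet
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid"     # autotest_common.sh helper: poll the RPC socket until it answers

Only after waitforlisten returns does the test begin issuing the rpc.py calls below that assemble ftl0.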
00:49:13.934 [2024-11-26 17:46:14.479259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77899 ] 00:49:14.193 [2024-11-26 17:46:14.657078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:14.193 [2024-11-26 17:46:14.765834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:14.763 17:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:14.763 17:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:49:14.763 17:46:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:49:14.763 17:46:15 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:49:14.763 17:46:15 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:49:14.763 17:46:15 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:49:14.763 17:46:15 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:49:14.763 17:46:15 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:49:15.023 17:46:15 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:49:15.023 17:46:15 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:49:15.023 17:46:15 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:49:15.023 17:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:49:15.023 17:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:49:15.023 17:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:49:15.023 17:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:49:15.023 17:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:49:15.283 17:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:49:15.283 { 00:49:15.283 "name": "nvme0n1", 00:49:15.283 "aliases": [ 00:49:15.283 "63bfe1ea-211d-47d6-be89-597e53c11edc" 00:49:15.283 ], 00:49:15.283 "product_name": "NVMe disk", 00:49:15.283 "block_size": 4096, 00:49:15.283 "num_blocks": 1310720, 00:49:15.283 "uuid": "63bfe1ea-211d-47d6-be89-597e53c11edc", 00:49:15.283 "numa_id": -1, 00:49:15.283 "assigned_rate_limits": { 00:49:15.283 "rw_ios_per_sec": 0, 00:49:15.283 "rw_mbytes_per_sec": 0, 00:49:15.283 "r_mbytes_per_sec": 0, 00:49:15.283 "w_mbytes_per_sec": 0 00:49:15.283 }, 00:49:15.283 "claimed": true, 00:49:15.283 "claim_type": "read_many_write_one", 00:49:15.283 "zoned": false, 00:49:15.283 "supported_io_types": { 00:49:15.283 "read": true, 00:49:15.283 "write": true, 00:49:15.283 "unmap": true, 00:49:15.283 "flush": true, 00:49:15.283 "reset": true, 00:49:15.283 "nvme_admin": true, 00:49:15.283 "nvme_io": true, 00:49:15.283 "nvme_io_md": false, 00:49:15.283 "write_zeroes": true, 00:49:15.283 "zcopy": false, 00:49:15.283 "get_zone_info": false, 00:49:15.283 "zone_management": false, 00:49:15.283 "zone_append": false, 00:49:15.283 "compare": true, 00:49:15.283 "compare_and_write": false, 00:49:15.283 "abort": true, 00:49:15.283 "seek_hole": false, 00:49:15.283 "seek_data": false, 00:49:15.283 "copy": true, 00:49:15.283 "nvme_iov_md": false 00:49:15.283 }, 00:49:15.283 "driver_specific": { 00:49:15.283 
"nvme": [ 00:49:15.283 { 00:49:15.283 "pci_address": "0000:00:11.0", 00:49:15.283 "trid": { 00:49:15.283 "trtype": "PCIe", 00:49:15.283 "traddr": "0000:00:11.0" 00:49:15.283 }, 00:49:15.283 "ctrlr_data": { 00:49:15.283 "cntlid": 0, 00:49:15.283 "vendor_id": "0x1b36", 00:49:15.283 "model_number": "QEMU NVMe Ctrl", 00:49:15.283 "serial_number": "12341", 00:49:15.283 "firmware_revision": "8.0.0", 00:49:15.283 "subnqn": "nqn.2019-08.org.qemu:12341", 00:49:15.283 "oacs": { 00:49:15.283 "security": 0, 00:49:15.283 "format": 1, 00:49:15.283 "firmware": 0, 00:49:15.283 "ns_manage": 1 00:49:15.283 }, 00:49:15.283 "multi_ctrlr": false, 00:49:15.283 "ana_reporting": false 00:49:15.283 }, 00:49:15.283 "vs": { 00:49:15.283 "nvme_version": "1.4" 00:49:15.283 }, 00:49:15.283 "ns_data": { 00:49:15.283 "id": 1, 00:49:15.283 "can_share": false 00:49:15.283 } 00:49:15.283 } 00:49:15.283 ], 00:49:15.283 "mp_policy": "active_passive" 00:49:15.283 } 00:49:15.283 } 00:49:15.283 ]' 00:49:15.283 17:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:49:15.283 17:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:49:15.283 17:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:49:15.283 17:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:49:15.283 17:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:49:15.283 17:46:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:49:15.283 17:46:15 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:49:15.283 17:46:15 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:49:15.283 17:46:15 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:49:15.283 17:46:15 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:49:15.283 17:46:15 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:49:15.543 17:46:16 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=c890adde-e079-46e7-8cb2-235dac540d51 00:49:15.543 17:46:16 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:49:15.543 17:46:16 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c890adde-e079-46e7-8cb2-235dac540d51 00:49:15.802 17:46:16 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:49:16.061 17:46:16 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=ee070088-d705-4023-b157-c2545c59ce22 00:49:16.061 17:46:16 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ee070088-d705-4023-b157-c2545c59ce22 00:49:16.061 17:46:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=9d88592f-186e-4152-94bc-6df388dd3976 00:49:16.061 17:46:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9d88592f-186e-4152-94bc-6df388dd3976 00:49:16.061 17:46:16 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:49:16.061 17:46:16 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:49:16.061 17:46:16 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=9d88592f-186e-4152-94bc-6df388dd3976 00:49:16.061 17:46:16 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:49:16.061 17:46:16 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 9d88592f-186e-4152-94bc-6df388dd3976 00:49:16.061 17:46:16 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=9d88592f-186e-4152-94bc-6df388dd3976 00:49:16.061 17:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:49:16.061 17:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:49:16.061 17:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:49:16.061 17:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9d88592f-186e-4152-94bc-6df388dd3976 00:49:16.320 17:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:49:16.320 { 00:49:16.320 "name": "9d88592f-186e-4152-94bc-6df388dd3976", 00:49:16.320 "aliases": [ 00:49:16.320 "lvs/nvme0n1p0" 00:49:16.320 ], 00:49:16.320 "product_name": "Logical Volume", 00:49:16.320 "block_size": 4096, 00:49:16.320 "num_blocks": 26476544, 00:49:16.320 "uuid": "9d88592f-186e-4152-94bc-6df388dd3976", 00:49:16.320 "assigned_rate_limits": { 00:49:16.320 "rw_ios_per_sec": 0, 00:49:16.320 "rw_mbytes_per_sec": 0, 00:49:16.320 "r_mbytes_per_sec": 0, 00:49:16.320 "w_mbytes_per_sec": 0 00:49:16.320 }, 00:49:16.320 "claimed": false, 00:49:16.320 "zoned": false, 00:49:16.320 "supported_io_types": { 00:49:16.320 "read": true, 00:49:16.320 "write": true, 00:49:16.320 "unmap": true, 00:49:16.320 "flush": false, 00:49:16.320 "reset": true, 00:49:16.320 "nvme_admin": false, 00:49:16.320 "nvme_io": false, 00:49:16.320 "nvme_io_md": false, 00:49:16.320 "write_zeroes": true, 00:49:16.320 "zcopy": false, 00:49:16.320 "get_zone_info": false, 00:49:16.320 "zone_management": false, 00:49:16.320 "zone_append": false, 00:49:16.320 "compare": false, 00:49:16.320 "compare_and_write": false, 00:49:16.320 "abort": false, 00:49:16.320 "seek_hole": true, 00:49:16.320 "seek_data": true, 00:49:16.320 "copy": false, 00:49:16.320 "nvme_iov_md": false 00:49:16.320 }, 00:49:16.320 "driver_specific": { 00:49:16.320 "lvol": { 00:49:16.320 "lvol_store_uuid": "ee070088-d705-4023-b157-c2545c59ce22", 00:49:16.320 "base_bdev": "nvme0n1", 00:49:16.320 "thin_provision": true, 00:49:16.320 "num_allocated_clusters": 0, 00:49:16.320 "snapshot": false, 00:49:16.320 "clone": false, 00:49:16.320 "esnap_clone": false 00:49:16.320 } 00:49:16.320 } 00:49:16.320 } 00:49:16.320 ]' 00:49:16.320 17:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:49:16.320 17:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:49:16.320 17:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:49:16.320 17:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:49:16.320 17:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:49:16.320 17:46:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:49:16.320 17:46:16 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:49:16.320 17:46:16 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:49:16.320 17:46:16 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:49:16.578 17:46:17 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:49:16.578 17:46:17 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:49:16.578 17:46:17 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 9d88592f-186e-4152-94bc-6df388dd3976 00:49:16.578 17:46:17 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=9d88592f-186e-4152-94bc-6df388dd3976 00:49:16.578 17:46:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:49:16.578 17:46:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:49:16.578 17:46:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:49:16.578 17:46:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9d88592f-186e-4152-94bc-6df388dd3976 00:49:16.836 17:46:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:49:16.836 { 00:49:16.836 "name": "9d88592f-186e-4152-94bc-6df388dd3976", 00:49:16.836 "aliases": [ 00:49:16.836 "lvs/nvme0n1p0" 00:49:16.836 ], 00:49:16.836 "product_name": "Logical Volume", 00:49:16.836 "block_size": 4096, 00:49:16.836 "num_blocks": 26476544, 00:49:16.836 "uuid": "9d88592f-186e-4152-94bc-6df388dd3976", 00:49:16.836 "assigned_rate_limits": { 00:49:16.836 "rw_ios_per_sec": 0, 00:49:16.836 "rw_mbytes_per_sec": 0, 00:49:16.836 "r_mbytes_per_sec": 0, 00:49:16.836 "w_mbytes_per_sec": 0 00:49:16.836 }, 00:49:16.836 "claimed": false, 00:49:16.836 "zoned": false, 00:49:16.836 "supported_io_types": { 00:49:16.836 "read": true, 00:49:16.836 "write": true, 00:49:16.836 "unmap": true, 00:49:16.836 "flush": false, 00:49:16.836 "reset": true, 00:49:16.836 "nvme_admin": false, 00:49:16.836 "nvme_io": false, 00:49:16.836 "nvme_io_md": false, 00:49:16.836 "write_zeroes": true, 00:49:16.836 "zcopy": false, 00:49:16.836 "get_zone_info": false, 00:49:16.836 "zone_management": false, 00:49:16.836 "zone_append": false, 00:49:16.836 "compare": false, 00:49:16.836 "compare_and_write": false, 00:49:16.836 "abort": false, 00:49:16.836 "seek_hole": true, 00:49:16.836 "seek_data": true, 00:49:16.836 "copy": false, 00:49:16.836 "nvme_iov_md": false 00:49:16.836 }, 00:49:16.836 "driver_specific": { 00:49:16.836 "lvol": { 00:49:16.836 "lvol_store_uuid": "ee070088-d705-4023-b157-c2545c59ce22", 00:49:16.836 "base_bdev": "nvme0n1", 00:49:16.836 "thin_provision": true, 00:49:16.836 "num_allocated_clusters": 0, 00:49:16.836 "snapshot": false, 00:49:16.836 "clone": false, 00:49:16.836 "esnap_clone": false 00:49:16.836 } 00:49:16.836 } 00:49:16.836 } 00:49:16.836 ]' 00:49:16.836 17:46:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:49:16.836 17:46:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:49:16.836 17:46:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:49:16.836 17:46:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:49:16.836 17:46:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:49:16.836 17:46:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:49:16.836 17:46:17 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:49:16.836 17:46:17 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:49:17.095 17:46:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:49:17.095 17:46:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 9d88592f-186e-4152-94bc-6df388dd3976 00:49:17.095 17:46:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=9d88592f-186e-4152-94bc-6df388dd3976 00:49:17.095 17:46:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:49:17.095 17:46:17 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:49:17.095 17:46:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:49:17.095 17:46:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9d88592f-186e-4152-94bc-6df388dd3976 00:49:17.354 17:46:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:49:17.354 { 00:49:17.355 "name": "9d88592f-186e-4152-94bc-6df388dd3976", 00:49:17.355 "aliases": [ 00:49:17.355 "lvs/nvme0n1p0" 00:49:17.355 ], 00:49:17.355 "product_name": "Logical Volume", 00:49:17.355 "block_size": 4096, 00:49:17.355 "num_blocks": 26476544, 00:49:17.355 "uuid": "9d88592f-186e-4152-94bc-6df388dd3976", 00:49:17.355 "assigned_rate_limits": { 00:49:17.355 "rw_ios_per_sec": 0, 00:49:17.355 "rw_mbytes_per_sec": 0, 00:49:17.355 "r_mbytes_per_sec": 0, 00:49:17.355 "w_mbytes_per_sec": 0 00:49:17.355 }, 00:49:17.355 "claimed": false, 00:49:17.355 "zoned": false, 00:49:17.355 "supported_io_types": { 00:49:17.355 "read": true, 00:49:17.355 "write": true, 00:49:17.355 "unmap": true, 00:49:17.355 "flush": false, 00:49:17.355 "reset": true, 00:49:17.355 "nvme_admin": false, 00:49:17.355 "nvme_io": false, 00:49:17.355 "nvme_io_md": false, 00:49:17.355 "write_zeroes": true, 00:49:17.355 "zcopy": false, 00:49:17.355 "get_zone_info": false, 00:49:17.355 "zone_management": false, 00:49:17.355 "zone_append": false, 00:49:17.355 "compare": false, 00:49:17.355 "compare_and_write": false, 00:49:17.355 "abort": false, 00:49:17.355 "seek_hole": true, 00:49:17.355 "seek_data": true, 00:49:17.355 "copy": false, 00:49:17.355 "nvme_iov_md": false 00:49:17.355 }, 00:49:17.355 "driver_specific": { 00:49:17.355 "lvol": { 00:49:17.355 "lvol_store_uuid": "ee070088-d705-4023-b157-c2545c59ce22", 00:49:17.355 "base_bdev": "nvme0n1", 00:49:17.355 "thin_provision": true, 00:49:17.355 "num_allocated_clusters": 0, 00:49:17.355 "snapshot": false, 00:49:17.355 "clone": false, 00:49:17.355 "esnap_clone": false 00:49:17.355 } 00:49:17.355 } 00:49:17.355 } 00:49:17.355 ]' 00:49:17.355 17:46:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:49:17.355 17:46:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:49:17.355 17:46:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:49:17.355 17:46:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:49:17.355 17:46:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:49:17.355 17:46:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:49:17.355 17:46:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:49:17.355 17:46:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9d88592f-186e-4152-94bc-6df388dd3976 -c nvc0n1p0 --l2p_dram_limit 20 00:49:17.615 [2024-11-26 17:46:18.198069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.615 [2024-11-26 17:46:18.198124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:49:17.615 [2024-11-26 17:46:18.198139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:49:17.615 [2024-11-26 17:46:18.198169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.615 [2024-11-26 17:46:18.198239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.615 [2024-11-26 17:46:18.198255] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:49:17.615 [2024-11-26 17:46:18.198266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:49:17.615 [2024-11-26 17:46:18.198278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.615 [2024-11-26 17:46:18.198297] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:49:17.615 [2024-11-26 17:46:18.199284] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:49:17.615 [2024-11-26 17:46:18.199313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.615 [2024-11-26 17:46:18.199327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:49:17.615 [2024-11-26 17:46:18.199339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.022 ms 00:49:17.615 [2024-11-26 17:46:18.199351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.615 [2024-11-26 17:46:18.199439] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 513f290a-9b14-461b-b973-3e3bace39398 00:49:17.615 [2024-11-26 17:46:18.200909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.615 [2024-11-26 17:46:18.200939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:49:17.615 [2024-11-26 17:46:18.200958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:49:17.615 [2024-11-26 17:46:18.200968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.615 [2024-11-26 17:46:18.208610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.615 [2024-11-26 17:46:18.208640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:49:17.615 [2024-11-26 17:46:18.208657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.611 ms 00:49:17.615 [2024-11-26 17:46:18.208684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.615 [2024-11-26 17:46:18.208782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.615 [2024-11-26 17:46:18.208795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:49:17.615 [2024-11-26 17:46:18.208812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:49:17.615 [2024-11-26 17:46:18.208823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.615 [2024-11-26 17:46:18.208888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.615 [2024-11-26 17:46:18.208900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:49:17.615 [2024-11-26 17:46:18.208913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:49:17.615 [2024-11-26 17:46:18.208923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.615 [2024-11-26 17:46:18.208951] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:49:17.615 [2024-11-26 17:46:18.214269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.615 [2024-11-26 17:46:18.214306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:49:17.615 [2024-11-26 17:46:18.214321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.337 ms 00:49:17.615 [2024-11-26 17:46:18.214333] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.615 [2024-11-26 17:46:18.214363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.616 [2024-11-26 17:46:18.214377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:49:17.616 [2024-11-26 17:46:18.214387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:49:17.616 [2024-11-26 17:46:18.214399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.616 [2024-11-26 17:46:18.214432] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:49:17.616 [2024-11-26 17:46:18.214611] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:49:17.616 [2024-11-26 17:46:18.214628] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:49:17.616 [2024-11-26 17:46:18.214644] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:49:17.616 [2024-11-26 17:46:18.214657] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:49:17.616 [2024-11-26 17:46:18.214672] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:49:17.616 [2024-11-26 17:46:18.214682] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:49:17.616 [2024-11-26 17:46:18.214695] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:49:17.616 [2024-11-26 17:46:18.214705] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:49:17.616 [2024-11-26 17:46:18.214720] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:49:17.616 [2024-11-26 17:46:18.214731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.616 [2024-11-26 17:46:18.214746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:49:17.616 [2024-11-26 17:46:18.214756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:49:17.616 [2024-11-26 17:46:18.214768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.616 [2024-11-26 17:46:18.214840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.616 [2024-11-26 17:46:18.214853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:49:17.616 [2024-11-26 17:46:18.214863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:49:17.616 [2024-11-26 17:46:18.214877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.616 [2024-11-26 17:46:18.214958] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:49:17.616 [2024-11-26 17:46:18.214976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:49:17.616 [2024-11-26 17:46:18.214987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:49:17.616 [2024-11-26 17:46:18.215000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:17.616 [2024-11-26 17:46:18.215010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:49:17.616 [2024-11-26 17:46:18.215022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:49:17.616 [2024-11-26 17:46:18.215031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:49:17.616 
[2024-11-26 17:46:18.215042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:49:17.616 [2024-11-26 17:46:18.215052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:49:17.616 [2024-11-26 17:46:18.215064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:49:17.616 [2024-11-26 17:46:18.215073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:49:17.616 [2024-11-26 17:46:18.215095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:49:17.616 [2024-11-26 17:46:18.215104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:49:17.616 [2024-11-26 17:46:18.215118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:49:17.616 [2024-11-26 17:46:18.215127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:49:17.616 [2024-11-26 17:46:18.215143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:17.616 [2024-11-26 17:46:18.215152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:49:17.616 [2024-11-26 17:46:18.215164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:49:17.616 [2024-11-26 17:46:18.215174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:17.616 [2024-11-26 17:46:18.215186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:49:17.616 [2024-11-26 17:46:18.215195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:49:17.616 [2024-11-26 17:46:18.215206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:17.616 [2024-11-26 17:46:18.215216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:49:17.616 [2024-11-26 17:46:18.215228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:49:17.616 [2024-11-26 17:46:18.215237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:17.616 [2024-11-26 17:46:18.215250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:49:17.616 [2024-11-26 17:46:18.215259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:49:17.616 [2024-11-26 17:46:18.215270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:17.616 [2024-11-26 17:46:18.215279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:49:17.616 [2024-11-26 17:46:18.215291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:49:17.616 [2024-11-26 17:46:18.215300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:17.616 [2024-11-26 17:46:18.215314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:49:17.616 [2024-11-26 17:46:18.215323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:49:17.616 [2024-11-26 17:46:18.215335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:49:17.616 [2024-11-26 17:46:18.215344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:49:17.616 [2024-11-26 17:46:18.215356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:49:17.616 [2024-11-26 17:46:18.215365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:49:17.616 [2024-11-26 17:46:18.215376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:49:17.616 [2024-11-26 17:46:18.215394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:49:17.616 [2024-11-26 17:46:18.215406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:17.616 [2024-11-26 17:46:18.215416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:49:17.616 [2024-11-26 17:46:18.215427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:49:17.616 [2024-11-26 17:46:18.215436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:17.616 [2024-11-26 17:46:18.215449] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:49:17.616 [2024-11-26 17:46:18.215459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:49:17.616 [2024-11-26 17:46:18.215473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:49:17.616 [2024-11-26 17:46:18.215483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:17.616 [2024-11-26 17:46:18.215508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:49:17.616 [2024-11-26 17:46:18.215518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:49:17.616 [2024-11-26 17:46:18.215530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:49:17.616 [2024-11-26 17:46:18.215539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:49:17.616 [2024-11-26 17:46:18.215551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:49:17.616 [2024-11-26 17:46:18.215561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:49:17.616 [2024-11-26 17:46:18.215577] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:49:17.616 [2024-11-26 17:46:18.215589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:49:17.616 [2024-11-26 17:46:18.215603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:49:17.616 [2024-11-26 17:46:18.215614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:49:17.616 [2024-11-26 17:46:18.215627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:49:17.616 [2024-11-26 17:46:18.215638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:49:17.616 [2024-11-26 17:46:18.215651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:49:17.616 [2024-11-26 17:46:18.215661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:49:17.616 [2024-11-26 17:46:18.215674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:49:17.616 [2024-11-26 17:46:18.215684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:49:17.616 [2024-11-26 17:46:18.215699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:49:17.616 [2024-11-26 17:46:18.215710] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:49:17.616 [2024-11-26 17:46:18.215723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:49:17.616 [2024-11-26 17:46:18.215733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:49:17.616 [2024-11-26 17:46:18.215747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:49:17.616 [2024-11-26 17:46:18.215757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:49:17.616 [2024-11-26 17:46:18.215769] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:49:17.616 [2024-11-26 17:46:18.215783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:49:17.616 [2024-11-26 17:46:18.215797] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:49:17.616 [2024-11-26 17:46:18.215807] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:49:17.616 [2024-11-26 17:46:18.215819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:49:17.617 [2024-11-26 17:46:18.215830] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:49:17.617 [2024-11-26 17:46:18.215843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:17.617 [2024-11-26 17:46:18.215853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:49:17.617 [2024-11-26 17:46:18.215869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:49:17.617 [2024-11-26 17:46:18.215879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:17.617 [2024-11-26 17:46:18.215919] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
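The two dumps above give the same layout in different units: dump_region prints region offsets and sizes in MiB, while the superblock dump lists raw hex blk_offs/blk_sz values. A minimal conversion sketch, assuming a 4 KiB FTL block size (an assumption, though one the dumped numbers are consistent with); blk_to_mib is a hypothetical helper, not part of the test scripts:

blk_to_mib() {
    # $1 is a hex block offset or count from the superblock dump;
    # one FTL block is assumed to be 4096 bytes here.
    echo "scale=2; $(( $1 )) * 4096 / 1048576" | bc
}
blk_to_mib 0x5020   # prints 80.12 -- the "Region band_md ... offset: 80.12 MiB" above
blk_to_mib 0x80     # prints .50  -- the "blocks: 0.50 MiB" above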
00:49:17.617 [2024-11-26 17:46:18.215932] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:49:21.812 [2024-11-26 17:46:22.263012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:21.812 [2024-11-26 17:46:22.263095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:49:21.812 [2024-11-26 17:46:22.263115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4053.662 ms 00:49:21.812 [2024-11-26 17:46:22.263126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:21.812 [2024-11-26 17:46:22.302194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:21.812 [2024-11-26 17:46:22.302247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:49:21.812 [2024-11-26 17:46:22.302266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.694 ms 00:49:21.812 [2024-11-26 17:46:22.302277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:21.812 [2024-11-26 17:46:22.302430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:21.812 [2024-11-26 17:46:22.302443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:49:21.812 [2024-11-26 17:46:22.302460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:49:21.812 [2024-11-26 17:46:22.302470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:21.812 [2024-11-26 17:46:22.360059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:21.812 [2024-11-26 17:46:22.360112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:49:21.812 [2024-11-26 17:46:22.360130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.620 ms 00:49:21.812 [2024-11-26 17:46:22.360141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:21.812 [2024-11-26 17:46:22.360199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:21.812 [2024-11-26 17:46:22.360210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:49:21.812 [2024-11-26 17:46:22.360226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:49:21.812 [2024-11-26 17:46:22.360236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:21.812 [2024-11-26 17:46:22.360731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:21.812 [2024-11-26 17:46:22.360752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:49:21.812 [2024-11-26 17:46:22.360766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:49:21.812 [2024-11-26 17:46:22.360776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:21.812 [2024-11-26 17:46:22.360905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:21.812 [2024-11-26 17:46:22.360920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:49:21.812 [2024-11-26 17:46:22.360936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:49:21.812 [2024-11-26 17:46:22.360949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:21.812 [2024-11-26 17:46:22.380446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:21.812 [2024-11-26 17:46:22.380489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:49:21.812 [2024-11-26 
17:46:22.380523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.506 ms 00:49:21.812 [2024-11-26 17:46:22.380543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:21.812 [2024-11-26 17:46:22.393025] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:49:21.812 [2024-11-26 17:46:22.399047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:21.812 [2024-11-26 17:46:22.399087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:49:21.812 [2024-11-26 17:46:22.399101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.432 ms 00:49:21.812 [2024-11-26 17:46:22.399114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:22.072 [2024-11-26 17:46:22.506055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:22.072 [2024-11-26 17:46:22.506116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:49:22.072 [2024-11-26 17:46:22.506133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.085 ms 00:49:22.072 [2024-11-26 17:46:22.506146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:22.072 [2024-11-26 17:46:22.506350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:22.072 [2024-11-26 17:46:22.506372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:49:22.072 [2024-11-26 17:46:22.506387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:49:22.072 [2024-11-26 17:46:22.506399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:22.072 [2024-11-26 17:46:22.543013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:22.072 [2024-11-26 17:46:22.543063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:49:22.072 [2024-11-26 17:46:22.543078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.618 ms 00:49:22.072 [2024-11-26 17:46:22.543092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:22.072 [2024-11-26 17:46:22.579198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:22.072 [2024-11-26 17:46:22.579247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:49:22.072 [2024-11-26 17:46:22.579263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.124 ms 00:49:22.072 [2024-11-26 17:46:22.579275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:22.072 [2024-11-26 17:46:22.580012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:22.072 [2024-11-26 17:46:22.580044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:49:22.072 [2024-11-26 17:46:22.580056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:49:22.072 [2024-11-26 17:46:22.580068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:22.072 [2024-11-26 17:46:22.686576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:22.072 [2024-11-26 17:46:22.686633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:49:22.072 [2024-11-26 17:46:22.686649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.624 ms 00:49:22.072 [2024-11-26 17:46:22.686662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:22.072 [2024-11-26 
17:46:22.724934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:22.072 [2024-11-26 17:46:22.724993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:49:22.072 [2024-11-26 17:46:22.725009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.228 ms 00:49:22.072 [2024-11-26 17:46:22.725038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:22.072 [2024-11-26 17:46:22.761504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:22.072 [2024-11-26 17:46:22.761553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:49:22.072 [2024-11-26 17:46:22.761567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.484 ms 00:49:22.072 [2024-11-26 17:46:22.761579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:22.331 [2024-11-26 17:46:22.798060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:22.331 [2024-11-26 17:46:22.798105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:49:22.331 [2024-11-26 17:46:22.798135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.501 ms 00:49:22.331 [2024-11-26 17:46:22.798148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:22.331 [2024-11-26 17:46:22.798189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:22.331 [2024-11-26 17:46:22.798207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:49:22.331 [2024-11-26 17:46:22.798218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:49:22.331 [2024-11-26 17:46:22.798230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:22.331 [2024-11-26 17:46:22.798346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:22.331 [2024-11-26 17:46:22.798362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:49:22.331 [2024-11-26 17:46:22.798372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:49:22.331 [2024-11-26 17:46:22.798388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:22.331 [2024-11-26 17:46:22.799402] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4608.375 ms, result 0 00:49:22.331 { 00:49:22.331 "name": "ftl0", 00:49:22.331 "uuid": "513f290a-9b14-461b-b973-3e3bace39398" 00:49:22.331 } 00:49:22.331 17:46:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:49:22.331 17:46:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:49:22.331 17:46:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:49:22.590 17:46:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:49:22.590 [2024-11-26 17:46:23.131389] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:49:22.590 I/O size of 69632 is greater than zero copy threshold (65536). 00:49:22.590 Zero copy mechanism will not be used. 00:49:22.590 Running I/O for 4 seconds... 
00:49:24.492 1435.00 IOPS, 95.29 MiB/s [2024-11-26T17:46:26.568Z] 1491.50 IOPS, 99.04 MiB/s [2024-11-26T17:46:27.138Z] 1517.33 IOPS, 100.76 MiB/s [2024-11-26T17:46:27.138Z] 1542.50 IOPS, 102.43 MiB/s
00:49:26.444 Latency(us)
00:49:26.444 [2024-11-26T17:46:27.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:49:26.444 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:49:26.444 ftl0 : 4.00 1542.19 102.41 0.00 0.00 680.27 223.72 2000.30
00:49:26.444 [2024-11-26T17:46:27.138Z] ===================================================================================================================
00:49:26.444 [2024-11-26T17:46:27.138Z] Total : 1542.19 102.41 0.00 0.00 680.27 223.72 2000.30
00:49:26.444 [2024-11-26 17:46:27.135576] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:49:26.444 {
00:49:26.444 "results": [
00:49:26.444 {
00:49:26.444 "job": "ftl0",
00:49:26.444 "core_mask": "0x1",
00:49:26.444 "workload": "randwrite",
00:49:26.444 "status": "finished",
00:49:26.444 "queue_depth": 1,
00:49:26.444 "io_size": 69632,
00:49:26.444 "runtime": 4.001447,
00:49:26.444 "iops": 1542.1921120034826,
00:49:26.444 "mibps": 102.41119493773127,
00:49:26.444 "io_failed": 0,
00:49:26.444 "io_timeout": 0,
00:49:26.444 "avg_latency_us": 680.272502487669,
00:49:26.444 "min_latency_us": 223.71726907630523,
00:49:26.444 "max_latency_us": 2000.2955823293173
00:49:26.444 }
00:49:26.444 ],
00:49:26.444 "core_count": 1
00:49:26.444 }
00:49:26.704 17:46:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-11-26 17:46:27.252218] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
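Each perform_tests call prints a results JSON like the block above, and its fields are mutually consistent: mibps is just iops scaled by the I/O size. Were that block saved to a file, say results.json (a hypothetical name; the test leaves the output in-line), the relationship could be checked with the same jq tool the script already uses:

jq -r '.results[0] | .iops * .io_size / 1048576' results.json
# 1542.1921... * 69632 / 1048576 = 102.4111..., matching the reported "mibps"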
00:49:28.581 11835.00 IOPS, 46.23 MiB/s [2024-11-26T17:46:30.653Z] 11760.00 IOPS, 45.94 MiB/s [2024-11-26T17:46:31.590Z] 11444.67 IOPS, 44.71 MiB/s [2024-11-26T17:46:31.590Z] 11302.25 IOPS, 44.15 MiB/s
00:49:30.896 Latency(us)
00:49:30.896 [2024-11-26T17:46:31.590Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:49:30.896 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:49:30.896 ftl0 : 4.01 11291.55 44.11 0.00 0.00 11314.06 223.72 24635.22
00:49:30.896 [2024-11-26T17:46:31.590Z] ===================================================================================================================
00:49:30.896 [2024-11-26T17:46:31.590Z] Total : 11291.55 44.11 0.00 0.00 11314.06 0.00 24635.22
00:49:30.896 [2024-11-26 17:46:31.270626] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:49:30.896 {
00:49:30.896 "results": [
00:49:30.896 {
00:49:30.896 "job": "ftl0",
00:49:30.896 "core_mask": "0x1",
00:49:30.896 "workload": "randwrite",
00:49:30.896 "status": "finished",
00:49:30.896 "queue_depth": 128,
00:49:30.896 "io_size": 4096,
00:49:30.896 "runtime": 4.014948,
00:49:30.896 "iops": 11291.553464702407,
00:49:30.896 "mibps": 44.10763072149378,
00:49:30.896 "io_failed": 0,
00:49:30.896 "io_timeout": 0,
00:49:30.896 "avg_latency_us": 11314.055630449448,
00:49:30.896 "min_latency_us": 223.71726907630523,
00:49:30.896 "max_latency_us": 24635.219277108434
00:49:30.896 }
00:49:30.896 ],
00:49:30.896 "core_count": 1
00:49:30.896 }
00:49:30.896 17:46:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-11-26 17:46:31.401960] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
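At a fixed queue depth, throughput and latency can also be sanity-checked against each other with Little's law: IOPS ~= depth / average latency. Checking the depth-128 run above (illustrative only, not part of the test):

echo "scale=1; 128 * 1000000 / 11314.06" | bc   # ~11313.3 IOPS predicted
# vs. 11291.55 IOPS measured -- agreement within about 0.2%, as expected
# when the queue stays full for the whole 4-second run.

The verify run that follows obeys the same relation: 128 / 16819.15 us predicts roughly 7610 IOPS against 7588.49 measured.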
00:49:32.773 7448.00 IOPS, 29.09 MiB/s [2024-11-26T17:46:34.847Z] 7520.50 IOPS, 29.38 MiB/s [2024-11-26T17:46:35.415Z] 7568.33 IOPS, 29.56 MiB/s [2024-11-26T17:46:35.415Z] 7575.50 IOPS, 29.59 MiB/s
00:49:34.721 Latency(us)
00:49:34.721 [2024-11-26T17:46:35.415Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:49:34.721 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:49:34.721 Verification LBA range: start 0x0 length 0x1400000
00:49:34.721 ftl0 : 4.01 7588.49 29.64 0.00 0.00 16819.15 296.10 20424.07
00:49:34.721 [2024-11-26T17:46:35.415Z] ===================================================================================================================
00:49:34.721 [2024-11-26T17:46:35.415Z] Total : 7588.49 29.64 0.00 0.00 16819.15 0.00 20424.07
00:49:34.980 [2024-11-26 17:46:35.424355] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:49:34.980 {
00:49:34.980 "results": [
00:49:34.980 {
00:49:34.980 "job": "ftl0",
00:49:34.980 "core_mask": "0x1",
00:49:34.980 "workload": "verify",
00:49:34.980 "status": "finished",
00:49:34.980 "verify_range": {
00:49:34.980 "start": 0,
00:49:34.980 "length": 20971520
00:49:34.980 },
00:49:34.980 "queue_depth": 128,
00:49:34.980 "io_size": 4096,
00:49:34.980 "runtime": 4.010018,
00:49:34.980 "iops": 7588.494615236141,
00:49:34.980 "mibps": 29.642557090766175,
00:49:34.980 "io_failed": 0,
00:49:34.980 "io_timeout": 0,
00:49:34.980 "avg_latency_us": 16819.146813847572,
00:49:34.980 "min_latency_us": 296.09638554216866,
00:49:34.980 "max_latency_us": 20424.070682730922
00:49:34.980 }
00:49:34.980 ],
00:49:34.980 "core_count": 1
00:49:34.980 }
00:49:34.980 17:46:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
[2024-11-26 17:46:35.639319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:49:34.980 [2024-11-26 17:46:35.639372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:49:34.980 [2024-11-26 17:46:35.639397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:49:34.980 [2024-11-26 17:46:35.639410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:49:34.980 [2024-11-26 17:46:35.639435] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:49:34.980 [2024-11-26 17:46:35.643643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:49:34.980 [2024-11-26 17:46:35.643694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:49:34.980 [2024-11-26 17:46:35.643711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.193 ms
00:49:34.980 [2024-11-26 17:46:35.643721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:49:34.980 [2024-11-26 17:46:35.645610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:49:34.980 [2024-11-26 17:46:35.645773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:49:34.980 [2024-11-26 17:46:35.645804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.860 ms
00:49:34.980 [2024-11-26 17:46:35.645816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:49:35.239 [2024-11-26 17:46:35.849972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:49:35.239 [2024-11-26 17:46:35.850046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name:
Persist L2P 00:49:35.239 [2024-11-26 17:46:35.850070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 204.447 ms 00:49:35.239 [2024-11-26 17:46:35.850082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:35.239 [2024-11-26 17:46:35.855279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:35.239 [2024-11-26 17:46:35.855331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:49:35.239 [2024-11-26 17:46:35.855352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.163 ms 00:49:35.239 [2024-11-26 17:46:35.855362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:35.239 [2024-11-26 17:46:35.890733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:35.239 [2024-11-26 17:46:35.890770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:49:35.239 [2024-11-26 17:46:35.890786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.350 ms 00:49:35.239 [2024-11-26 17:46:35.890797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:35.239 [2024-11-26 17:46:35.912765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:35.239 [2024-11-26 17:46:35.912818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:49:35.239 [2024-11-26 17:46:35.912836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.962 ms 00:49:35.239 [2024-11-26 17:46:35.912846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:35.239 [2024-11-26 17:46:35.912981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:35.239 [2024-11-26 17:46:35.912994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:49:35.239 [2024-11-26 17:46:35.913013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:49:35.239 [2024-11-26 17:46:35.913022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:35.511 [2024-11-26 17:46:35.949267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:35.511 [2024-11-26 17:46:35.949302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:49:35.511 [2024-11-26 17:46:35.949318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.284 ms 00:49:35.511 [2024-11-26 17:46:35.949327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:35.511 [2024-11-26 17:46:35.985431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:35.511 [2024-11-26 17:46:35.985466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:49:35.511 [2024-11-26 17:46:35.985481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.121 ms 00:49:35.511 [2024-11-26 17:46:35.985492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:35.511 [2024-11-26 17:46:36.020545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:35.511 [2024-11-26 17:46:36.020580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:49:35.511 [2024-11-26 17:46:36.020595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.055 ms 00:49:35.511 [2024-11-26 17:46:36.020604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:35.511 [2024-11-26 17:46:36.054837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:35.511 [2024-11-26 
17:46:36.054871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:49:35.511 [2024-11-26 17:46:36.054896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.201 ms 00:49:35.511 [2024-11-26 17:46:36.054905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:35.511 [2024-11-26 17:46:36.054960] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:49:35.511 [2024-11-26 17:46:36.054976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.054990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055877] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.055996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:49:35.511 [2024-11-26 17:46:36.056006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:49:35.512 [2024-11-26 17:46:36.056018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:49:35.512 [2024-11-26 17:46:36.056028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:49:35.512 [2024-11-26 17:46:36.056041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:49:35.512 [2024-11-26 17:46:36.056052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:49:35.512 [2024-11-26 17:46:36.056065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:49:35.512 [2024-11-26 17:46:36.056075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:49:35.512 [2024-11-26 17:46:36.056088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:49:35.512 [2024-11-26 17:46:36.056098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:49:35.512 [2024-11-26 17:46:36.056111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:49:35.512 [2024-11-26 17:46:36.056121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:49:35.512 [2024-11-26 17:46:36.056138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:49:35.512 [2024-11-26 17:46:36.056148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:49:35.512 [2024-11-26 17:46:36.056161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:49:35.512 [2024-11-26 17:46:36.056187] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:49:35.512 [2024-11-26 17:46:36.056202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:49:35.512 [2024-11-26 17:46:36.056213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:49:35.512 [2024-11-26 17:46:36.056226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:49:35.512 [2024-11-26 17:46:36.056244] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:49:35.512 [2024-11-26 17:46:36.056261] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 513f290a-9b14-461b-b973-3e3bace39398 00:49:35.512 [2024-11-26 17:46:36.056272] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:49:35.512 [2024-11-26 17:46:36.056284] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:49:35.512 [2024-11-26 17:46:36.056294] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:49:35.512 [2024-11-26 17:46:36.056306] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:49:35.512 [2024-11-26 17:46:36.056317] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:49:35.512 [2024-11-26 17:46:36.056343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:49:35.512 [2024-11-26 17:46:36.056353] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:49:35.512 [2024-11-26 17:46:36.056367] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:49:35.512 [2024-11-26 17:46:36.056376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:49:35.512 [2024-11-26 17:46:36.056388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:35.512 [2024-11-26 17:46:36.056399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:49:35.512 [2024-11-26 17:46:36.056412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.445 ms 00:49:35.512 [2024-11-26 17:46:36.056423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:35.512 [2024-11-26 17:46:36.076129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:35.512 [2024-11-26 17:46:36.076164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:49:35.512 [2024-11-26 17:46:36.076180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.681 ms 00:49:35.512 [2024-11-26 17:46:36.076191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:35.512 [2024-11-26 17:46:36.076759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:35.512 [2024-11-26 17:46:36.076777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:49:35.512 [2024-11-26 17:46:36.076791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:49:35.512 [2024-11-26 17:46:36.076804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:35.512 [2024-11-26 17:46:36.131278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:35.512 [2024-11-26 17:46:36.131318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:49:35.512 [2024-11-26 17:46:36.131337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:35.512 [2024-11-26 17:46:36.131347] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:49:35.512 [2024-11-26 17:46:36.131412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:35.512 [2024-11-26 17:46:36.131423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:49:35.512 [2024-11-26 17:46:36.131436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:35.512 [2024-11-26 17:46:36.131449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:35.512 [2024-11-26 17:46:36.131586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:35.512 [2024-11-26 17:46:36.131602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:49:35.512 [2024-11-26 17:46:36.131632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:35.512 [2024-11-26 17:46:36.131642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:35.512 [2024-11-26 17:46:36.131664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:35.512 [2024-11-26 17:46:36.131675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:49:35.512 [2024-11-26 17:46:36.131689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:35.512 [2024-11-26 17:46:36.131699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:35.771 [2024-11-26 17:46:36.254968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:35.771 [2024-11-26 17:46:36.255026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:49:35.771 [2024-11-26 17:46:36.255048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:35.771 [2024-11-26 17:46:36.255059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:35.771 [2024-11-26 17:46:36.354106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:35.771 [2024-11-26 17:46:36.354164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:49:35.771 [2024-11-26 17:46:36.354182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:35.771 [2024-11-26 17:46:36.354197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:35.771 [2024-11-26 17:46:36.354326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:35.771 [2024-11-26 17:46:36.354339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:49:35.771 [2024-11-26 17:46:36.354352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:35.771 [2024-11-26 17:46:36.354373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:35.771 [2024-11-26 17:46:36.354420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:35.771 [2024-11-26 17:46:36.354431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:49:35.771 [2024-11-26 17:46:36.354444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:35.771 [2024-11-26 17:46:36.354454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:35.771 [2024-11-26 17:46:36.354609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:35.771 [2024-11-26 17:46:36.354624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:49:35.771 [2024-11-26 17:46:36.354641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:49:35.771 [2024-11-26 17:46:36.354651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:35.771 [2024-11-26 17:46:36.354693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:35.771 [2024-11-26 17:46:36.354706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:49:35.771 [2024-11-26 17:46:36.354719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:35.771 [2024-11-26 17:46:36.354729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:35.771 [2024-11-26 17:46:36.354772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:35.771 [2024-11-26 17:46:36.354784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:49:35.771 [2024-11-26 17:46:36.354797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:35.771 [2024-11-26 17:46:36.354817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:35.771 [2024-11-26 17:46:36.354862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:35.771 [2024-11-26 17:46:36.354873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:49:35.771 [2024-11-26 17:46:36.354886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:35.771 [2024-11-26 17:46:36.354896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:35.771 [2024-11-26 17:46:36.355031] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 716.823 ms, result 0 00:49:35.771 true 00:49:35.771 17:46:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 77899 00:49:35.771 17:46:36 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 77899 ']' 00:49:35.771 17:46:36 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 77899 00:49:35.771 17:46:36 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:49:35.771 17:46:36 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:35.771 17:46:36 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77899 00:49:35.771 killing process with pid 77899 00:49:35.771 Received shutdown signal, test time was about 4.000000 seconds 00:49:35.771 00:49:35.771 Latency(us) 00:49:35.771 [2024-11-26T17:46:36.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:35.771 [2024-11-26T17:46:36.465Z] =================================================================================================================== 00:49:35.771 [2024-11-26T17:46:36.465Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:49:35.771 17:46:36 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:49:35.771 17:46:36 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:49:35.771 17:46:36 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77899' 00:49:35.771 17:46:36 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 77899 00:49:35.771 17:46:36 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 77899 00:49:41.040 17:46:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:49:41.040 17:46:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:49:41.040 Remove shared memory files 00:49:41.040 17:46:41 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:49:41.040 17:46:41 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:49:41.040 17:46:41 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:49:41.040 17:46:41 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:49:41.040 17:46:41 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:49:41.040 17:46:41 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:49:41.040 ************************************ 00:49:41.040 END TEST ftl_bdevperf 00:49:41.040 ************************************ 00:49:41.040 00:49:41.040 real 0m27.164s 00:49:41.040 user 0m29.626s 00:49:41.040 sys 0m1.247s 00:49:41.040 17:46:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:49:41.040 17:46:41 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:49:41.040 17:46:41 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:49:41.040 17:46:41 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:49:41.040 17:46:41 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:49:41.040 17:46:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:49:41.040 ************************************ 00:49:41.040 START TEST ftl_trim 00:49:41.040 ************************************ 00:49:41.040 17:46:41 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:49:41.040 * Looking for test storage... 00:49:41.040 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:49:41.040 17:46:41 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:49:41.040 17:46:41 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:49:41.040 17:46:41 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:49:41.040 17:46:41 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:41.040 17:46:41 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:49:41.040 17:46:41 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:41.040 17:46:41 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:49:41.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:41.040 --rc genhtml_branch_coverage=1 00:49:41.040 --rc genhtml_function_coverage=1 00:49:41.040 --rc genhtml_legend=1 00:49:41.040 --rc geninfo_all_blocks=1 00:49:41.040 --rc geninfo_unexecuted_blocks=1 00:49:41.040 00:49:41.040 ' 00:49:41.040 17:46:41 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:49:41.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:41.040 --rc genhtml_branch_coverage=1 00:49:41.040 --rc genhtml_function_coverage=1 00:49:41.040 --rc genhtml_legend=1 00:49:41.040 --rc geninfo_all_blocks=1 00:49:41.040 --rc geninfo_unexecuted_blocks=1 00:49:41.040 00:49:41.040 ' 00:49:41.040 17:46:41 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:49:41.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:41.040 --rc genhtml_branch_coverage=1 00:49:41.040 --rc genhtml_function_coverage=1 00:49:41.040 --rc genhtml_legend=1 00:49:41.040 --rc geninfo_all_blocks=1 00:49:41.040 --rc geninfo_unexecuted_blocks=1 00:49:41.040 00:49:41.040 ' 00:49:41.041 17:46:41 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:49:41.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:41.041 --rc genhtml_branch_coverage=1 00:49:41.041 --rc genhtml_function_coverage=1 00:49:41.041 --rc genhtml_legend=1 00:49:41.041 --rc geninfo_all_blocks=1 00:49:41.041 --rc geninfo_unexecuted_blocks=1 00:49:41.041 00:49:41.041 ' 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:49:41.041 17:46:41 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78263 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:49:41.041 17:46:41 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78263 00:49:41.041 17:46:41 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78263 ']' 00:49:41.041 17:46:41 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:41.041 17:46:41 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:49:41.041 17:46:41 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:41.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:41.041 17:46:41 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:49:41.041 17:46:41 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:49:41.300 [2024-11-26 17:46:41.745491] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:49:41.300 [2024-11-26 17:46:41.745650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78263 ] 00:49:41.300 [2024-11-26 17:46:41.933497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:49:41.564 [2024-11-26 17:46:42.082066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:49:41.564 [2024-11-26 17:46:42.082220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:41.564 [2024-11-26 17:46:42.082268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:49:42.515 17:46:43 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:49:42.515 17:46:43 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:49:42.515 17:46:43 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:49:42.515 17:46:43 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:49:42.515 17:46:43 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:49:42.515 17:46:43 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:49:42.515 17:46:43 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:49:42.515 17:46:43 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:49:42.774 17:46:43 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:49:42.774 17:46:43 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:49:42.774 17:46:43 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:49:42.774 17:46:43 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:49:42.774 17:46:43 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:49:42.774 17:46:43 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:49:42.774 17:46:43 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:49:42.774 17:46:43 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:49:43.034 17:46:43 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:49:43.034 { 00:49:43.034 "name": "nvme0n1", 00:49:43.034 "aliases": [ 
00:49:43.034 "652d648f-0407-4461-8c94-44adf32edf71" 00:49:43.034 ], 00:49:43.034 "product_name": "NVMe disk", 00:49:43.034 "block_size": 4096, 00:49:43.034 "num_blocks": 1310720, 00:49:43.034 "uuid": "652d648f-0407-4461-8c94-44adf32edf71", 00:49:43.034 "numa_id": -1, 00:49:43.034 "assigned_rate_limits": { 00:49:43.034 "rw_ios_per_sec": 0, 00:49:43.034 "rw_mbytes_per_sec": 0, 00:49:43.034 "r_mbytes_per_sec": 0, 00:49:43.034 "w_mbytes_per_sec": 0 00:49:43.034 }, 00:49:43.034 "claimed": true, 00:49:43.034 "claim_type": "read_many_write_one", 00:49:43.034 "zoned": false, 00:49:43.034 "supported_io_types": { 00:49:43.034 "read": true, 00:49:43.034 "write": true, 00:49:43.034 "unmap": true, 00:49:43.034 "flush": true, 00:49:43.034 "reset": true, 00:49:43.034 "nvme_admin": true, 00:49:43.034 "nvme_io": true, 00:49:43.034 "nvme_io_md": false, 00:49:43.034 "write_zeroes": true, 00:49:43.034 "zcopy": false, 00:49:43.034 "get_zone_info": false, 00:49:43.034 "zone_management": false, 00:49:43.034 "zone_append": false, 00:49:43.034 "compare": true, 00:49:43.034 "compare_and_write": false, 00:49:43.034 "abort": true, 00:49:43.034 "seek_hole": false, 00:49:43.034 "seek_data": false, 00:49:43.034 "copy": true, 00:49:43.034 "nvme_iov_md": false 00:49:43.034 }, 00:49:43.034 "driver_specific": { 00:49:43.034 "nvme": [ 00:49:43.034 { 00:49:43.034 "pci_address": "0000:00:11.0", 00:49:43.034 "trid": { 00:49:43.034 "trtype": "PCIe", 00:49:43.034 "traddr": "0000:00:11.0" 00:49:43.034 }, 00:49:43.034 "ctrlr_data": { 00:49:43.034 "cntlid": 0, 00:49:43.034 "vendor_id": "0x1b36", 00:49:43.034 "model_number": "QEMU NVMe Ctrl", 00:49:43.034 "serial_number": "12341", 00:49:43.034 "firmware_revision": "8.0.0", 00:49:43.034 "subnqn": "nqn.2019-08.org.qemu:12341", 00:49:43.034 "oacs": { 00:49:43.034 "security": 0, 00:49:43.034 "format": 1, 00:49:43.034 "firmware": 0, 00:49:43.034 "ns_manage": 1 00:49:43.034 }, 00:49:43.034 "multi_ctrlr": false, 00:49:43.034 "ana_reporting": false 00:49:43.034 }, 00:49:43.034 "vs": { 00:49:43.034 "nvme_version": "1.4" 00:49:43.034 }, 00:49:43.034 "ns_data": { 00:49:43.034 "id": 1, 00:49:43.034 "can_share": false 00:49:43.034 } 00:49:43.034 } 00:49:43.034 ], 00:49:43.034 "mp_policy": "active_passive" 00:49:43.034 } 00:49:43.034 } 00:49:43.034 ]' 00:49:43.034 17:46:43 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:49:43.034 17:46:43 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:49:43.034 17:46:43 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:49:43.294 17:46:43 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:49:43.294 17:46:43 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:49:43.294 17:46:43 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:49:43.294 17:46:43 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:49:43.294 17:46:43 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:49:43.294 17:46:43 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:49:43.294 17:46:43 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:49:43.294 17:46:43 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:49:43.294 17:46:43 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=ee070088-d705-4023-b157-c2545c59ce22 00:49:43.294 17:46:43 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:49:43.294 17:46:43 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u ee070088-d705-4023-b157-c2545c59ce22 00:49:43.553 17:46:44 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:49:43.813 17:46:44 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=2eb553cf-fc9f-40a8-861f-795083f03615 00:49:43.813 17:46:44 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2eb553cf-fc9f-40a8-861f-795083f03615 00:49:44.072 17:46:44 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=d58f181d-29c1-4b0e-9f79-b26dc09fefa2 00:49:44.072 17:46:44 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d58f181d-29c1-4b0e-9f79-b26dc09fefa2 00:49:44.072 17:46:44 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:49:44.072 17:46:44 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:49:44.072 17:46:44 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=d58f181d-29c1-4b0e-9f79-b26dc09fefa2 00:49:44.072 17:46:44 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:49:44.072 17:46:44 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size d58f181d-29c1-4b0e-9f79-b26dc09fefa2 00:49:44.072 17:46:44 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=d58f181d-29c1-4b0e-9f79-b26dc09fefa2 00:49:44.072 17:46:44 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:49:44.072 17:46:44 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:49:44.072 17:46:44 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:49:44.072 17:46:44 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d58f181d-29c1-4b0e-9f79-b26dc09fefa2 00:49:44.332 17:46:44 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:49:44.332 { 00:49:44.332 "name": "d58f181d-29c1-4b0e-9f79-b26dc09fefa2", 00:49:44.332 "aliases": [ 00:49:44.332 "lvs/nvme0n1p0" 00:49:44.332 ], 00:49:44.332 "product_name": "Logical Volume", 00:49:44.332 "block_size": 4096, 00:49:44.332 "num_blocks": 26476544, 00:49:44.332 "uuid": "d58f181d-29c1-4b0e-9f79-b26dc09fefa2", 00:49:44.332 "assigned_rate_limits": { 00:49:44.332 "rw_ios_per_sec": 0, 00:49:44.332 "rw_mbytes_per_sec": 0, 00:49:44.332 "r_mbytes_per_sec": 0, 00:49:44.332 "w_mbytes_per_sec": 0 00:49:44.332 }, 00:49:44.332 "claimed": false, 00:49:44.332 "zoned": false, 00:49:44.332 "supported_io_types": { 00:49:44.332 "read": true, 00:49:44.332 "write": true, 00:49:44.332 "unmap": true, 00:49:44.332 "flush": false, 00:49:44.332 "reset": true, 00:49:44.332 "nvme_admin": false, 00:49:44.332 "nvme_io": false, 00:49:44.332 "nvme_io_md": false, 00:49:44.332 "write_zeroes": true, 00:49:44.332 "zcopy": false, 00:49:44.332 "get_zone_info": false, 00:49:44.332 "zone_management": false, 00:49:44.332 "zone_append": false, 00:49:44.332 "compare": false, 00:49:44.332 "compare_and_write": false, 00:49:44.332 "abort": false, 00:49:44.332 "seek_hole": true, 00:49:44.332 "seek_data": true, 00:49:44.332 "copy": false, 00:49:44.332 "nvme_iov_md": false 00:49:44.332 }, 00:49:44.332 "driver_specific": { 00:49:44.332 "lvol": { 00:49:44.332 "lvol_store_uuid": "2eb553cf-fc9f-40a8-861f-795083f03615", 00:49:44.332 "base_bdev": "nvme0n1", 00:49:44.332 "thin_provision": true, 00:49:44.332 "num_allocated_clusters": 0, 00:49:44.332 "snapshot": false, 00:49:44.332 "clone": false, 00:49:44.332 "esnap_clone": false 00:49:44.332 } 00:49:44.332 } 00:49:44.332 } 00:49:44.332 ]' 00:49:44.332 17:46:44 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:49:44.332 17:46:44 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:49:44.332 17:46:44 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:49:44.332 17:46:44 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:49:44.332 17:46:44 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:49:44.332 17:46:44 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:49:44.332 17:46:44 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:49:44.332 17:46:44 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:49:44.332 17:46:44 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:49:44.592 17:46:45 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:49:44.592 17:46:45 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:49:44.592 17:46:45 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size d58f181d-29c1-4b0e-9f79-b26dc09fefa2 00:49:44.592 17:46:45 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=d58f181d-29c1-4b0e-9f79-b26dc09fefa2 00:49:44.592 17:46:45 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:49:44.592 17:46:45 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:49:44.592 17:46:45 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:49:44.592 17:46:45 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d58f181d-29c1-4b0e-9f79-b26dc09fefa2 00:49:44.851 17:46:45 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:49:44.851 { 00:49:44.851 "name": "d58f181d-29c1-4b0e-9f79-b26dc09fefa2", 00:49:44.851 "aliases": [ 00:49:44.851 "lvs/nvme0n1p0" 00:49:44.851 ], 00:49:44.851 "product_name": "Logical Volume", 00:49:44.852 "block_size": 4096, 00:49:44.852 "num_blocks": 26476544, 00:49:44.852 "uuid": "d58f181d-29c1-4b0e-9f79-b26dc09fefa2", 00:49:44.852 "assigned_rate_limits": { 00:49:44.852 "rw_ios_per_sec": 0, 00:49:44.852 "rw_mbytes_per_sec": 0, 00:49:44.852 "r_mbytes_per_sec": 0, 00:49:44.852 "w_mbytes_per_sec": 0 00:49:44.852 }, 00:49:44.852 "claimed": false, 00:49:44.852 "zoned": false, 00:49:44.852 "supported_io_types": { 00:49:44.852 "read": true, 00:49:44.852 "write": true, 00:49:44.852 "unmap": true, 00:49:44.852 "flush": false, 00:49:44.852 "reset": true, 00:49:44.852 "nvme_admin": false, 00:49:44.852 "nvme_io": false, 00:49:44.852 "nvme_io_md": false, 00:49:44.852 "write_zeroes": true, 00:49:44.852 "zcopy": false, 00:49:44.852 "get_zone_info": false, 00:49:44.852 "zone_management": false, 00:49:44.852 "zone_append": false, 00:49:44.852 "compare": false, 00:49:44.852 "compare_and_write": false, 00:49:44.852 "abort": false, 00:49:44.852 "seek_hole": true, 00:49:44.852 "seek_data": true, 00:49:44.852 "copy": false, 00:49:44.852 "nvme_iov_md": false 00:49:44.852 }, 00:49:44.852 "driver_specific": { 00:49:44.852 "lvol": { 00:49:44.852 "lvol_store_uuid": "2eb553cf-fc9f-40a8-861f-795083f03615", 00:49:44.852 "base_bdev": "nvme0n1", 00:49:44.852 "thin_provision": true, 00:49:44.852 "num_allocated_clusters": 0, 00:49:44.852 "snapshot": false, 00:49:44.852 "clone": false, 00:49:44.852 "esnap_clone": false 00:49:44.852 } 00:49:44.852 } 00:49:44.852 } 00:49:44.852 ]' 00:49:44.852 17:46:45 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:49:44.852 17:46:45 ftl.ftl_trim -- 
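The repeated jq '.[] .block_size' / jq '.[] .num_blocks' pairs traced here are the get_bdev_size helper computing a bdev's size in MiB from bdev_get_bdevs output. A condensed sketch of that pattern, assuming rpc.py is on PATH and talking to the default target socket:

    get_bdev_size() {
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$(rpc.py bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
        echo $(( bs * nb / 1024 / 1024 ))   # size in MiB
    }

For the thin lvol above this is 26476544 blocks * 4096 B = 103424 MiB, matching the bdev_size the test echoes back.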
common/autotest_common.sh@1387 -- # bs=4096 00:49:44.852 17:46:45 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:49:44.852 17:46:45 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:49:44.852 17:46:45 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:49:44.852 17:46:45 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:49:44.852 17:46:45 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:49:44.852 17:46:45 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:49:45.111 17:46:45 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:49:45.111 17:46:45 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:49:45.111 17:46:45 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size d58f181d-29c1-4b0e-9f79-b26dc09fefa2 00:49:45.111 17:46:45 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=d58f181d-29c1-4b0e-9f79-b26dc09fefa2 00:49:45.111 17:46:45 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:49:45.111 17:46:45 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:49:45.111 17:46:45 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:49:45.111 17:46:45 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d58f181d-29c1-4b0e-9f79-b26dc09fefa2 00:49:45.370 17:46:45 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:49:45.370 { 00:49:45.370 "name": "d58f181d-29c1-4b0e-9f79-b26dc09fefa2", 00:49:45.370 "aliases": [ 00:49:45.370 "lvs/nvme0n1p0" 00:49:45.370 ], 00:49:45.370 "product_name": "Logical Volume", 00:49:45.370 "block_size": 4096, 00:49:45.370 "num_blocks": 26476544, 00:49:45.370 "uuid": "d58f181d-29c1-4b0e-9f79-b26dc09fefa2", 00:49:45.370 "assigned_rate_limits": { 00:49:45.370 "rw_ios_per_sec": 0, 00:49:45.370 "rw_mbytes_per_sec": 0, 00:49:45.370 "r_mbytes_per_sec": 0, 00:49:45.370 "w_mbytes_per_sec": 0 00:49:45.370 }, 00:49:45.370 "claimed": false, 00:49:45.370 "zoned": false, 00:49:45.370 "supported_io_types": { 00:49:45.370 "read": true, 00:49:45.370 "write": true, 00:49:45.370 "unmap": true, 00:49:45.370 "flush": false, 00:49:45.370 "reset": true, 00:49:45.370 "nvme_admin": false, 00:49:45.370 "nvme_io": false, 00:49:45.370 "nvme_io_md": false, 00:49:45.370 "write_zeroes": true, 00:49:45.370 "zcopy": false, 00:49:45.370 "get_zone_info": false, 00:49:45.370 "zone_management": false, 00:49:45.370 "zone_append": false, 00:49:45.370 "compare": false, 00:49:45.370 "compare_and_write": false, 00:49:45.370 "abort": false, 00:49:45.370 "seek_hole": true, 00:49:45.370 "seek_data": true, 00:49:45.370 "copy": false, 00:49:45.370 "nvme_iov_md": false 00:49:45.370 }, 00:49:45.370 "driver_specific": { 00:49:45.370 "lvol": { 00:49:45.370 "lvol_store_uuid": "2eb553cf-fc9f-40a8-861f-795083f03615", 00:49:45.370 "base_bdev": "nvme0n1", 00:49:45.370 "thin_provision": true, 00:49:45.370 "num_allocated_clusters": 0, 00:49:45.370 "snapshot": false, 00:49:45.370 "clone": false, 00:49:45.370 "esnap_clone": false 00:49:45.370 } 00:49:45.370 } 00:49:45.370 } 00:49:45.370 ]' 00:49:45.370 17:46:45 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:49:45.370 17:46:45 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:49:45.370 17:46:45 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:49:45.370 17:46:46 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:49:45.370 17:46:46 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:49:45.370 17:46:46 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:49:45.370 17:46:46 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:49:45.370 17:46:46 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d58f181d-29c1-4b0e-9f79-b26dc09fefa2 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:49:45.631 [2024-11-26 17:46:46.180240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:45.631 [2024-11-26 17:46:46.180456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:49:45.632 [2024-11-26 17:46:46.180510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:49:45.632 [2024-11-26 17:46:46.180523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:45.632 [2024-11-26 17:46:46.184424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:45.632 [2024-11-26 17:46:46.184466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:49:45.632 [2024-11-26 17:46:46.184483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.858 ms 00:49:45.632 [2024-11-26 17:46:46.184504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:45.632 [2024-11-26 17:46:46.184637] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:49:45.632 [2024-11-26 17:46:46.185714] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:49:45.632 [2024-11-26 17:46:46.185753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:45.632 [2024-11-26 17:46:46.185765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:49:45.632 [2024-11-26 17:46:46.185779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.126 ms 00:49:45.632 [2024-11-26 17:46:46.185790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:45.632 [2024-11-26 17:46:46.185917] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 374db20d-0c07-4115-a3e3-8f48851ecd1a 00:49:45.632 [2024-11-26 17:46:46.188430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:45.632 [2024-11-26 17:46:46.188469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:49:45.632 [2024-11-26 17:46:46.188483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:49:45.632 [2024-11-26 17:46:46.188507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:45.632 [2024-11-26 17:46:46.203893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:45.632 [2024-11-26 17:46:46.203949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:49:45.632 [2024-11-26 17:46:46.203966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.287 ms 00:49:45.632 [2024-11-26 17:46:46.203981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:45.632 [2024-11-26 17:46:46.204193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:45.632 [2024-11-26 17:46:46.204214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:49:45.632 [2024-11-26 17:46:46.204226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.123 ms 00:49:45.632 [2024-11-26 17:46:46.204247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:45.632 [2024-11-26 17:46:46.204296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:45.632 [2024-11-26 17:46:46.204311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:49:45.632 [2024-11-26 17:46:46.204322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:49:45.632 [2024-11-26 17:46:46.204340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:45.632 [2024-11-26 17:46:46.204387] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:49:45.632 [2024-11-26 17:46:46.210828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:45.632 [2024-11-26 17:46:46.210968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:49:45.632 [2024-11-26 17:46:46.210995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.455 ms 00:49:45.632 [2024-11-26 17:46:46.211006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:45.632 [2024-11-26 17:46:46.211088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:45.632 [2024-11-26 17:46:46.211118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:49:45.632 [2024-11-26 17:46:46.211134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:49:45.632 [2024-11-26 17:46:46.211145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:45.632 [2024-11-26 17:46:46.211188] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:49:45.632 [2024-11-26 17:46:46.211332] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:49:45.632 [2024-11-26 17:46:46.211354] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:49:45.632 [2024-11-26 17:46:46.211369] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:49:45.632 [2024-11-26 17:46:46.211395] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:49:45.632 [2024-11-26 17:46:46.211408] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:49:45.632 [2024-11-26 17:46:46.211423] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:49:45.632 [2024-11-26 17:46:46.211434] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:49:45.632 [2024-11-26 17:46:46.211451] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:49:45.632 [2024-11-26 17:46:46.211462] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:49:45.632 [2024-11-26 17:46:46.211476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:45.632 [2024-11-26 17:46:46.211487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:49:45.632 [2024-11-26 17:46:46.211515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:49:45.632 [2024-11-26 17:46:46.211527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:45.632 [2024-11-26 17:46:46.211621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
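Everything from "Check configuration" onward is the FTL management pipeline reacting to the single bdev_ftl_create call traced at trim.sh@49; each Action/name/duration/status quartet is one startup step. Condensed, the invocation is as follows (flags copied from the trace; -t 240 raises the RPC client timeout because the first start scrubs the NV cache, which takes seconds):

    # base device: the thin lvol on 0000:00:11.0; cache: the split of 0000:00:10.0
    rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d d58f181d-29c1-4b0e-9f79-b26dc09fefa2 -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The superblock step logged just above generates the instance UUID ("Create new FTL, UUID 374db20d-..."), which the RPC also returns on completion.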
00:49:45.632 [2024-11-26 17:46:46.211632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:49:45.632 [2024-11-26 17:46:46.211647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:49:45.632 [2024-11-26 17:46:46.211658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:45.632 [2024-11-26 17:46:46.211805] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:49:45.632 [2024-11-26 17:46:46.211818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:49:45.632 [2024-11-26 17:46:46.211833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:49:45.632 [2024-11-26 17:46:46.211844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:45.632 [2024-11-26 17:46:46.211858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:49:45.632 [2024-11-26 17:46:46.211867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:49:45.632 [2024-11-26 17:46:46.211880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:49:45.632 [2024-11-26 17:46:46.211889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:49:45.632 [2024-11-26 17:46:46.211902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:49:45.632 [2024-11-26 17:46:46.211912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:49:45.632 [2024-11-26 17:46:46.211924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:49:45.632 [2024-11-26 17:46:46.211934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:49:45.632 [2024-11-26 17:46:46.211948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:49:45.632 [2024-11-26 17:46:46.211957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:49:45.632 [2024-11-26 17:46:46.211970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:49:45.632 [2024-11-26 17:46:46.211982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:45.632 [2024-11-26 17:46:46.211998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:49:45.632 [2024-11-26 17:46:46.212008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:49:45.632 [2024-11-26 17:46:46.212022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:45.632 [2024-11-26 17:46:46.212032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:49:45.632 [2024-11-26 17:46:46.212045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:49:45.632 [2024-11-26 17:46:46.212054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:45.632 [2024-11-26 17:46:46.212067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:49:45.632 [2024-11-26 17:46:46.212076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:49:45.632 [2024-11-26 17:46:46.212089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:45.632 [2024-11-26 17:46:46.212099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:49:45.632 [2024-11-26 17:46:46.212111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:49:45.632 [2024-11-26 17:46:46.212120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:45.632 [2024-11-26 17:46:46.212133] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l3 00:49:45.632 [2024-11-26 17:46:46.212142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:49:45.632 [2024-11-26 17:46:46.212154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:45.632 [2024-11-26 17:46:46.212163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:49:45.632 [2024-11-26 17:46:46.212179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:49:45.632 [2024-11-26 17:46:46.212188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:49:45.632 [2024-11-26 17:46:46.212200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:49:45.632 [2024-11-26 17:46:46.212210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:49:45.632 [2024-11-26 17:46:46.212222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:49:45.632 [2024-11-26 17:46:46.212231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:49:45.632 [2024-11-26 17:46:46.212243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:49:45.632 [2024-11-26 17:46:46.212252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:45.632 [2024-11-26 17:46:46.212264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:49:45.632 [2024-11-26 17:46:46.212274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:49:45.632 [2024-11-26 17:46:46.212286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:45.633 [2024-11-26 17:46:46.212296] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:49:45.633 [2024-11-26 17:46:46.212310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:49:45.633 [2024-11-26 17:46:46.212320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:49:45.633 [2024-11-26 17:46:46.212335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:45.633 [2024-11-26 17:46:46.212349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:49:45.633 [2024-11-26 17:46:46.212365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:49:45.633 [2024-11-26 17:46:46.212375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:49:45.633 [2024-11-26 17:46:46.212389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:49:45.633 [2024-11-26 17:46:46.212399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:49:45.633 [2024-11-26 17:46:46.212411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:49:45.633 [2024-11-26 17:46:46.212427] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:49:45.633 [2024-11-26 17:46:46.212444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:49:45.633 [2024-11-26 17:46:46.212464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:49:45.633 [2024-11-26 17:46:46.212477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:49:45.633 [2024-11-26 17:46:46.212488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 
blk_offs:0x5aa0 blk_sz:0x80 00:49:45.633 [2024-11-26 17:46:46.212513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:49:45.633 [2024-11-26 17:46:46.212524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:49:45.633 [2024-11-26 17:46:46.212538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:49:45.633 [2024-11-26 17:46:46.212549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:49:45.633 [2024-11-26 17:46:46.212563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:49:45.633 [2024-11-26 17:46:46.212574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:49:45.633 [2024-11-26 17:46:46.212591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:49:45.633 [2024-11-26 17:46:46.212602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:49:45.633 [2024-11-26 17:46:46.212617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:49:45.633 [2024-11-26 17:46:46.212628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:49:45.633 [2024-11-26 17:46:46.212641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:49:45.633 [2024-11-26 17:46:46.212652] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:49:45.633 [2024-11-26 17:46:46.212667] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:49:45.633 [2024-11-26 17:46:46.212679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:49:45.633 [2024-11-26 17:46:46.212693] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:49:45.633 [2024-11-26 17:46:46.212703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:49:45.633 [2024-11-26 17:46:46.212718] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:49:45.633 [2024-11-26 17:46:46.212730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:45.633 [2024-11-26 17:46:46.212744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:49:45.633 [2024-11-26 17:46:46.212754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.007 ms 00:49:45.633 [2024-11-26 17:46:46.212768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:45.633 [2024-11-26 17:46:46.212872] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV 
cache data region needs scrubbing, this may take a while. 00:49:45.633 [2024-11-26 17:46:46.212893] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:49:49.863 [2024-11-26 17:46:50.371127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:49.863 [2024-11-26 17:46:50.371427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:49:49.863 [2024-11-26 17:46:50.371554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4165.003 ms 00:49:49.863 [2024-11-26 17:46:50.371601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:49.863 [2024-11-26 17:46:50.417689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:49.863 [2024-11-26 17:46:50.417942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:49:49.863 [2024-11-26 17:46:50.418035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.706 ms 00:49:49.863 [2024-11-26 17:46:50.418077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:49.863 [2024-11-26 17:46:50.418313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:49.863 [2024-11-26 17:46:50.418465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:49:49.863 [2024-11-26 17:46:50.418627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:49:49.863 [2024-11-26 17:46:50.418672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:49.863 [2024-11-26 17:46:50.483724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:49.863 [2024-11-26 17:46:50.483903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:49:49.863 [2024-11-26 17:46:50.484022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.087 ms 00:49:49.863 [2024-11-26 17:46:50.484069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:49.863 [2024-11-26 17:46:50.484210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:49.863 [2024-11-26 17:46:50.484313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:49:49.863 [2024-11-26 17:46:50.484353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:49:49.863 [2024-11-26 17:46:50.484387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:49.863 [2024-11-26 17:46:50.485298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:49.863 [2024-11-26 17:46:50.485426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:49:49.863 [2024-11-26 17:46:50.485524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.764 ms 00:49:49.863 [2024-11-26 17:46:50.485572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:49.863 [2024-11-26 17:46:50.485780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:49.863 [2024-11-26 17:46:50.485824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:49:49.863 [2024-11-26 17:46:50.486015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:49:49.863 [2024-11-26 17:46:50.486061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:49.863 [2024-11-26 17:46:50.513078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:49.863 [2024-11-26 17:46:50.513225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:49:49.863 [2024-11-26 17:46:50.513373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.998 ms 00:49:49.863 [2024-11-26 17:46:50.513417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:49.863 [2024-11-26 17:46:50.527900] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:49:49.863 [2024-11-26 17:46:50.555480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:49.863 [2024-11-26 17:46:50.555699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:49:49.863 [2024-11-26 17:46:50.555858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.927 ms 00:49:49.863 [2024-11-26 17:46:50.555898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:50.123 [2024-11-26 17:46:50.666078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:50.123 [2024-11-26 17:46:50.666304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:49:50.123 [2024-11-26 17:46:50.666414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 110.159 ms 00:49:50.123 [2024-11-26 17:46:50.666453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:50.123 [2024-11-26 17:46:50.666790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:50.123 [2024-11-26 17:46:50.666841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:49:50.123 [2024-11-26 17:46:50.666929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:49:50.123 [2024-11-26 17:46:50.666964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:50.123 [2024-11-26 17:46:50.705186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:50.123 [2024-11-26 17:46:50.705328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:49:50.123 [2024-11-26 17:46:50.705452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.205 ms 00:49:50.123 [2024-11-26 17:46:50.705490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:50.123 [2024-11-26 17:46:50.742436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:50.123 [2024-11-26 17:46:50.742592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:49:50.123 [2024-11-26 17:46:50.742740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.875 ms 00:49:50.123 [2024-11-26 17:46:50.742773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:50.123 [2024-11-26 17:46:50.743551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:50.123 [2024-11-26 17:46:50.743674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:49:50.123 [2024-11-26 17:46:50.743700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:49:50.123 [2024-11-26 17:46:50.743711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:50.382 [2024-11-26 17:46:50.852916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:50.382 [2024-11-26 17:46:50.852973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:49:50.382 [2024-11-26 17:46:50.853000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 109.322 ms 00:49:50.382 [2024-11-26 17:46:50.853011] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:49:50.382 [2024-11-26 17:46:50.892501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:50.382 [2024-11-26 17:46:50.892671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:49:50.382 [2024-11-26 17:46:50.892702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.412 ms 00:49:50.382 [2024-11-26 17:46:50.892718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:50.382 [2024-11-26 17:46:50.930878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:50.382 [2024-11-26 17:46:50.930920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:49:50.382 [2024-11-26 17:46:50.930940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.118 ms 00:49:50.382 [2024-11-26 17:46:50.930950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:50.382 [2024-11-26 17:46:50.967674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:50.382 [2024-11-26 17:46:50.967733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:49:50.382 [2024-11-26 17:46:50.967752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.683 ms 00:49:50.382 [2024-11-26 17:46:50.967763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:50.382 [2024-11-26 17:46:50.967876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:50.382 [2024-11-26 17:46:50.967890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:49:50.382 [2024-11-26 17:46:50.967909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:49:50.382 [2024-11-26 17:46:50.967920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:50.382 [2024-11-26 17:46:50.968030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:50.382 [2024-11-26 17:46:50.968043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:49:50.382 [2024-11-26 17:46:50.968057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:49:50.382 [2024-11-26 17:46:50.968068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:50.382 [2024-11-26 17:46:50.969435] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:49:50.382 [2024-11-26 17:46:50.973739] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4796.656 ms, result 0 00:49:50.382 [2024-11-26 17:46:50.974868] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:49:50.382 { 00:49:50.382 "name": "ftl0", 00:49:50.382 "uuid": "374db20d-0c07-4115-a3e3-8f48851ecd1a" 00:49:50.383 } 00:49:50.383 17:46:51 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:49:50.383 17:46:51 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:49:50.383 17:46:51 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:49:50.383 17:46:51 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:49:50.383 17:46:51 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:49:50.383 17:46:51 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:49:50.383 17:46:51 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:49:50.641 17:46:51
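waitforbdev, traced here from common/autotest_common.sh (including the @910 call that follows), reduces to two RPCs: drain bdev examination, then ask for the bdev with a timeout. A rough equivalent, assuming rpc.py on PATH against the default socket:

    rpc.py bdev_wait_for_examine                     # block until examine callbacks finish
    if rpc.py bdev_get_bdevs -b ftl0 -t 2000 >/dev/null; then
        echo "ftl0 registered"                       # -t waits up to 2000 ms, the default used here
    fi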
ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:49:50.900 [ 00:49:50.900 { 00:49:50.900 "name": "ftl0", 00:49:50.900 "aliases": [ 00:49:50.900 "374db20d-0c07-4115-a3e3-8f48851ecd1a" 00:49:50.900 ], 00:49:50.900 "product_name": "FTL disk", 00:49:50.900 "block_size": 4096, 00:49:50.900 "num_blocks": 23592960, 00:49:50.900 "uuid": "374db20d-0c07-4115-a3e3-8f48851ecd1a", 00:49:50.900 "assigned_rate_limits": { 00:49:50.900 "rw_ios_per_sec": 0, 00:49:50.900 "rw_mbytes_per_sec": 0, 00:49:50.900 "r_mbytes_per_sec": 0, 00:49:50.900 "w_mbytes_per_sec": 0 00:49:50.900 }, 00:49:50.900 "claimed": false, 00:49:50.900 "zoned": false, 00:49:50.900 "supported_io_types": { 00:49:50.900 "read": true, 00:49:50.900 "write": true, 00:49:50.900 "unmap": true, 00:49:50.900 "flush": true, 00:49:50.900 "reset": false, 00:49:50.900 "nvme_admin": false, 00:49:50.900 "nvme_io": false, 00:49:50.900 "nvme_io_md": false, 00:49:50.900 "write_zeroes": true, 00:49:50.900 "zcopy": false, 00:49:50.900 "get_zone_info": false, 00:49:50.900 "zone_management": false, 00:49:50.900 "zone_append": false, 00:49:50.900 "compare": false, 00:49:50.900 "compare_and_write": false, 00:49:50.900 "abort": false, 00:49:50.900 "seek_hole": false, 00:49:50.900 "seek_data": false, 00:49:50.900 "copy": false, 00:49:50.900 "nvme_iov_md": false 00:49:50.900 }, 00:49:50.900 "driver_specific": { 00:49:50.900 "ftl": { 00:49:50.900 "base_bdev": "d58f181d-29c1-4b0e-9f79-b26dc09fefa2", 00:49:50.900 "cache": "nvc0n1p0" 00:49:50.900 } 00:49:50.900 } 00:49:50.900 } 00:49:50.900 ] 00:49:50.900 17:46:51 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:49:50.900 17:46:51 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:49:50.900 17:46:51 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:49:51.160 17:46:51 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:49:51.160 17:46:51 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:49:51.160 17:46:51 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:49:51.160 { 00:49:51.160 "name": "ftl0", 00:49:51.160 "aliases": [ 00:49:51.160 "374db20d-0c07-4115-a3e3-8f48851ecd1a" 00:49:51.160 ], 00:49:51.160 "product_name": "FTL disk", 00:49:51.160 "block_size": 4096, 00:49:51.160 "num_blocks": 23592960, 00:49:51.160 "uuid": "374db20d-0c07-4115-a3e3-8f48851ecd1a", 00:49:51.160 "assigned_rate_limits": { 00:49:51.160 "rw_ios_per_sec": 0, 00:49:51.160 "rw_mbytes_per_sec": 0, 00:49:51.160 "r_mbytes_per_sec": 0, 00:49:51.160 "w_mbytes_per_sec": 0 00:49:51.160 }, 00:49:51.160 "claimed": false, 00:49:51.160 "zoned": false, 00:49:51.160 "supported_io_types": { 00:49:51.160 "read": true, 00:49:51.160 "write": true, 00:49:51.160 "unmap": true, 00:49:51.160 "flush": true, 00:49:51.160 "reset": false, 00:49:51.160 "nvme_admin": false, 00:49:51.160 "nvme_io": false, 00:49:51.160 "nvme_io_md": false, 00:49:51.160 "write_zeroes": true, 00:49:51.160 "zcopy": false, 00:49:51.160 "get_zone_info": false, 00:49:51.160 "zone_management": false, 00:49:51.160 "zone_append": false, 00:49:51.160 "compare": false, 00:49:51.160 "compare_and_write": false, 00:49:51.160 "abort": false, 00:49:51.160 "seek_hole": false, 00:49:51.160 "seek_data": false, 00:49:51.160 "copy": false, 00:49:51.160 "nvme_iov_md": false 00:49:51.160 }, 00:49:51.160 "driver_specific": { 00:49:51.160 "ftl": { 00:49:51.161 "base_bdev": 
"d58f181d-29c1-4b0e-9f79-b26dc09fefa2", 00:49:51.161 "cache": "nvc0n1p0" 00:49:51.161 } 00:49:51.161 } 00:49:51.161 } 00:49:51.161 ]' 00:49:51.161 17:46:51 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:49:51.421 17:46:51 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:49:51.421 17:46:51 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:49:51.421 [2024-11-26 17:46:52.031348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:51.421 [2024-11-26 17:46:52.031433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:49:51.421 [2024-11-26 17:46:52.031454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:49:51.421 [2024-11-26 17:46:52.031469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.421 [2024-11-26 17:46:52.031536] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:49:51.421 [2024-11-26 17:46:52.036407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:51.421 [2024-11-26 17:46:52.036442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:49:51.421 [2024-11-26 17:46:52.036467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.852 ms 00:49:51.421 [2024-11-26 17:46:52.036478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.421 [2024-11-26 17:46:52.037186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:51.421 [2024-11-26 17:46:52.037205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:49:51.421 [2024-11-26 17:46:52.037221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.624 ms 00:49:51.421 [2024-11-26 17:46:52.037235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.421 [2024-11-26 17:46:52.040079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:51.421 [2024-11-26 17:46:52.040230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:49:51.421 [2024-11-26 17:46:52.040258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.813 ms 00:49:51.421 [2024-11-26 17:46:52.040269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.421 [2024-11-26 17:46:52.045935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:51.421 [2024-11-26 17:46:52.045967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:49:51.421 [2024-11-26 17:46:52.045982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.597 ms 00:49:51.421 [2024-11-26 17:46:52.045993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.421 [2024-11-26 17:46:52.085652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:51.421 [2024-11-26 17:46:52.085824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:49:51.421 [2024-11-26 17:46:52.085860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.623 ms 00:49:51.421 [2024-11-26 17:46:52.085872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.421 [2024-11-26 17:46:52.110063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:51.421 [2024-11-26 17:46:52.110117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:49:51.421 [2024-11-26 17:46:52.110138] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.114 ms 00:49:51.421 [2024-11-26 17:46:52.110150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.421 [2024-11-26 17:46:52.110426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:51.421 [2024-11-26 17:46:52.110442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:49:51.421 [2024-11-26 17:46:52.110458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:49:51.421 [2024-11-26 17:46:52.110469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.683 [2024-11-26 17:46:52.147789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:51.683 [2024-11-26 17:46:52.147832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:49:51.683 [2024-11-26 17:46:52.147851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.318 ms 00:49:51.683 [2024-11-26 17:46:52.147862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.683 [2024-11-26 17:46:52.186401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:51.683 [2024-11-26 17:46:52.186463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:49:51.683 [2024-11-26 17:46:52.186489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.479 ms 00:49:51.683 [2024-11-26 17:46:52.186512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.683 [2024-11-26 17:46:52.221947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:51.683 [2024-11-26 17:46:52.221987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:49:51.683 [2024-11-26 17:46:52.222006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.382 ms 00:49:51.683 [2024-11-26 17:46:52.222016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.683 [2024-11-26 17:46:52.259478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:51.683 [2024-11-26 17:46:52.259554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:49:51.683 [2024-11-26 17:46:52.259577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.358 ms 00:49:51.683 [2024-11-26 17:46:52.259588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.683 [2024-11-26 17:46:52.259706] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:49:51.683 [2024-11-26 17:46:52.259730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.259748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.259760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.259775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.259787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.259807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.259819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 
[2024-11-26 17:46:52.259834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.259845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.259861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.259873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.259889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.259900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.259914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.259925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.259939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.259950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.259964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.259975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: 
free 00:49:51.683 [2024-11-26 17:46:52.260187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:49:51.683 [2024-11-26 17:46:52.260394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.260405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.260420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.260431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.260446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.260457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.260475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.260487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.260792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.260859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 
261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.260912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.260960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.261011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.261099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.261164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.261212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.261265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.261313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.261489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.261553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.261605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.261653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.261710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.261823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.261874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.261887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.261902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.261912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.261927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.261939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.261953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.261964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.261978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.261989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.262004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.262015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.262029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.262040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.262058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.262069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.262083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.262094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.262108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.262120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.262133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.262145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.262160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.262170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.262186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.262197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.262213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.262225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.262240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:49:51.684 [2024-11-26 17:46:52.262261] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:49:51.684 [2024-11-26 17:46:52.262280] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 374db20d-0c07-4115-a3e3-8f48851ecd1a 00:49:51.684 [2024-11-26 17:46:52.262293] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:49:51.684 [2024-11-26 17:46:52.262311] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:49:51.684 [2024-11-26 17:46:52.262322] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:49:51.684 [2024-11-26 17:46:52.262337] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:49:51.684 [2024-11-26 17:46:52.262347] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:49:51.684 [2024-11-26 17:46:52.262363] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] crit: 0 00:49:51.684 [2024-11-26 17:46:52.262374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:49:51.684 [2024-11-26 17:46:52.262387] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:49:51.684 [2024-11-26 17:46:52.262396] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:49:51.684 [2024-11-26 17:46:52.262411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:51.684 [2024-11-26 17:46:52.262423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:49:51.684 [2024-11-26 17:46:52.262439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.713 ms 00:49:51.684 [2024-11-26 17:46:52.262450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.684 [2024-11-26 17:46:52.285091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:51.684 [2024-11-26 17:46:52.285135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:49:51.684 [2024-11-26 17:46:52.285158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.612 ms 00:49:51.684 [2024-11-26 17:46:52.285169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.684 [2024-11-26 17:46:52.285881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:51.684 [2024-11-26 17:46:52.285901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:49:51.684 [2024-11-26 17:46:52.285917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:49:51.684 [2024-11-26 17:46:52.285929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.685 [2024-11-26 17:46:52.360492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:51.685 [2024-11-26 17:46:52.360543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:49:51.685 [2024-11-26 17:46:52.360561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:51.685 [2024-11-26 17:46:52.360574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.685 [2024-11-26 17:46:52.360741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:51.685 [2024-11-26 17:46:52.360754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:49:51.685 [2024-11-26 17:46:52.360769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:51.685 [2024-11-26 17:46:52.360780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.685 [2024-11-26 17:46:52.360878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:51.685 [2024-11-26 17:46:52.360892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:49:51.685 [2024-11-26 17:46:52.360910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:51.685 [2024-11-26 17:46:52.360922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.685 [2024-11-26 17:46:52.360966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:51.685 [2024-11-26 17:46:52.360978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:49:51.685 [2024-11-26 17:46:52.360992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:51.685 [2024-11-26 17:46:52.361002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.945 [2024-11-26 
17:46:52.507491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:51.945 [2024-11-26 17:46:52.507576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:49:51.945 [2024-11-26 17:46:52.507597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:51.945 [2024-11-26 17:46:52.507609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.945 [2024-11-26 17:46:52.616410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:51.945 [2024-11-26 17:46:52.616482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:49:51.945 [2024-11-26 17:46:52.616519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:51.945 [2024-11-26 17:46:52.616532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.945 [2024-11-26 17:46:52.616739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:51.945 [2024-11-26 17:46:52.616758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:49:51.945 [2024-11-26 17:46:52.616778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:51.945 [2024-11-26 17:46:52.616789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.945 [2024-11-26 17:46:52.616862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:51.945 [2024-11-26 17:46:52.616874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:49:51.945 [2024-11-26 17:46:52.616889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:51.945 [2024-11-26 17:46:52.616899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.945 [2024-11-26 17:46:52.617055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:51.945 [2024-11-26 17:46:52.617069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:49:51.945 [2024-11-26 17:46:52.617088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:51.945 [2024-11-26 17:46:52.617099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.945 [2024-11-26 17:46:52.617173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:51.945 [2024-11-26 17:46:52.617186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:49:51.945 [2024-11-26 17:46:52.617200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:51.945 [2024-11-26 17:46:52.617210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.945 [2024-11-26 17:46:52.617282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:51.945 [2024-11-26 17:46:52.617293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:49:51.945 [2024-11-26 17:46:52.617315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:51.945 [2024-11-26 17:46:52.617325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.945 [2024-11-26 17:46:52.617402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:49:51.945 [2024-11-26 17:46:52.617424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:49:51.945 [2024-11-26 17:46:52.617439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:49:51.945 [2024-11-26 17:46:52.617449] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:51.945 [2024-11-26 17:46:52.617704] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 587.291 ms, result 0 00:49:51.945 true 00:49:52.205 17:46:52 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78263 00:49:52.205 17:46:52 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78263 ']' 00:49:52.205 17:46:52 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78263 00:49:52.205 17:46:52 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:49:52.205 17:46:52 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:49:52.205 17:46:52 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78263 00:49:52.205 killing process with pid 78263 00:49:52.205 17:46:52 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:49:52.205 17:46:52 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:49:52.205 17:46:52 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78263' 00:49:52.205 17:46:52 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78263 00:49:52.205 17:46:52 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78263 00:49:57.480 17:46:57 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:49:58.419 65536+0 records in 00:49:58.419 65536+0 records out 00:49:58.419 268435456 bytes (268 MB, 256 MiB) copied, 0.966862 s, 278 MB/s 00:49:58.419 17:46:58 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:49:58.419 [2024-11-26 17:46:58.939236] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
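The xtrace above shows trim.sh@63 tearing down the first app instance with the killprocess helper from common/autotest_common.sh, then trim.sh@66 generating the 256 MiB random test pattern: 65536 records of 4 KiB are 268435456 bytes, and 268435456 B / 0.966862 s comes to about 277.6 MB/s, which dd rounds to the 278 MB/s it reports. (In the statistics dump further up, WAF is the write amplification factor, total writes divided by user writes; it prints "inf" here because the 960 writes were all internal metadata writes against zero user writes.) A minimal sketch of killprocess() as reconstructed from the visible trace follows; anything beyond the traced commands is an assumption:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # @954: a pid argument is required
    kill -0 "$pid"                            # @958: bail out if the process is already gone
    [ "$(uname)" = Linux ] &&                 # @959/@960: resolve the command name
        process_name=$(ps --no-headers -o comm= "$pid")
    # @964: a 'sudo' process would get special handling (branch not taken in this run)
    echo "killing process with pid $pid"      # @972
    kill "$pid"                               # @973: default SIGTERM
    wait "$pid"                               # @978: reap the child and propagate its status
}

The spdk_dd invocation at trim.sh@69 then replays that pattern into ftl0, which is why a fresh FTL startup sequence follows below.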
00:49:58.419 [2024-11-26 17:46:58.939372] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78483 ] 00:49:58.677 [2024-11-26 17:46:59.123555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:58.677 [2024-11-26 17:46:59.267407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:59.305 [2024-11-26 17:46:59.697164] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:49:59.305 [2024-11-26 17:46:59.697247] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:49:59.305 [2024-11-26 17:46:59.864789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.305 [2024-11-26 17:46:59.864851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:49:59.305 [2024-11-26 17:46:59.864869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:49:59.305 [2024-11-26 17:46:59.864880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.305 [2024-11-26 17:46:59.868434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.305 [2024-11-26 17:46:59.868605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:49:59.305 [2024-11-26 17:46:59.868628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.538 ms 00:49:59.305 [2024-11-26 17:46:59.868640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.306 [2024-11-26 17:46:59.868824] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:49:59.306 [2024-11-26 17:46:59.869820] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:49:59.306 [2024-11-26 17:46:59.869856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.306 [2024-11-26 17:46:59.869869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:49:59.306 [2024-11-26 17:46:59.869880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.044 ms 00:49:59.306 [2024-11-26 17:46:59.869891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.306 [2024-11-26 17:46:59.872394] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:49:59.306 [2024-11-26 17:46:59.892884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.306 [2024-11-26 17:46:59.892921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:49:59.306 [2024-11-26 17:46:59.892936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.523 ms 00:49:59.306 [2024-11-26 17:46:59.892947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.306 [2024-11-26 17:46:59.893052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.306 [2024-11-26 17:46:59.893068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:49:59.306 [2024-11-26 17:46:59.893081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:49:59.306 [2024-11-26 17:46:59.893092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.306 [2024-11-26 17:46:59.905470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:49:59.306 [2024-11-26 17:46:59.905504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:49:59.306 [2024-11-26 17:46:59.905518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.353 ms 00:49:59.306 [2024-11-26 17:46:59.905529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.306 [2024-11-26 17:46:59.905657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.306 [2024-11-26 17:46:59.905674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:49:59.306 [2024-11-26 17:46:59.905685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:49:59.306 [2024-11-26 17:46:59.905696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.306 [2024-11-26 17:46:59.905733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.306 [2024-11-26 17:46:59.905745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:49:59.306 [2024-11-26 17:46:59.905755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:49:59.306 [2024-11-26 17:46:59.905766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.306 [2024-11-26 17:46:59.905791] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:49:59.306 [2024-11-26 17:46:59.911641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.306 [2024-11-26 17:46:59.911672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:49:59.306 [2024-11-26 17:46:59.911684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.867 ms 00:49:59.306 [2024-11-26 17:46:59.911695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.306 [2024-11-26 17:46:59.911748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.306 [2024-11-26 17:46:59.911760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:49:59.306 [2024-11-26 17:46:59.911771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:49:59.306 [2024-11-26 17:46:59.911781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.306 [2024-11-26 17:46:59.911807] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:49:59.306 [2024-11-26 17:46:59.911832] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:49:59.306 [2024-11-26 17:46:59.911870] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:49:59.306 [2024-11-26 17:46:59.911889] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:49:59.306 [2024-11-26 17:46:59.911996] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:49:59.306 [2024-11-26 17:46:59.912016] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:49:59.306 [2024-11-26 17:46:59.912030] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:49:59.306 [2024-11-26 17:46:59.912049] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:49:59.306 [2024-11-26 17:46:59.912061] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:49:59.306 [2024-11-26 17:46:59.912073] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:49:59.306 [2024-11-26 17:46:59.912083] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:49:59.306 [2024-11-26 17:46:59.912093] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:49:59.306 [2024-11-26 17:46:59.912103] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:49:59.306 [2024-11-26 17:46:59.912115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.306 [2024-11-26 17:46:59.912126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:49:59.306 [2024-11-26 17:46:59.912137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:49:59.306 [2024-11-26 17:46:59.912147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.306 [2024-11-26 17:46:59.912239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.306 [2024-11-26 17:46:59.912255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:49:59.306 [2024-11-26 17:46:59.912266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:49:59.306 [2024-11-26 17:46:59.912276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.306 [2024-11-26 17:46:59.912371] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:49:59.306 [2024-11-26 17:46:59.912385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:49:59.306 [2024-11-26 17:46:59.912396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:49:59.306 [2024-11-26 17:46:59.912408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:59.306 [2024-11-26 17:46:59.912419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:49:59.306 [2024-11-26 17:46:59.912429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:49:59.306 [2024-11-26 17:46:59.912438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:49:59.306 [2024-11-26 17:46:59.912459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:49:59.306 [2024-11-26 17:46:59.912469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:49:59.306 [2024-11-26 17:46:59.912478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:49:59.306 [2024-11-26 17:46:59.912488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:49:59.306 [2024-11-26 17:46:59.912523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:49:59.306 [2024-11-26 17:46:59.912533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:49:59.306 [2024-11-26 17:46:59.912542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:49:59.306 [2024-11-26 17:46:59.912551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:49:59.306 [2024-11-26 17:46:59.912561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:59.306 [2024-11-26 17:46:59.912569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:49:59.306 [2024-11-26 17:46:59.912579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:49:59.306 [2024-11-26 17:46:59.912587] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:59.306 [2024-11-26 17:46:59.912596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:49:59.306 [2024-11-26 17:46:59.912605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:49:59.306 [2024-11-26 17:46:59.912614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:59.306 [2024-11-26 17:46:59.912623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:49:59.306 [2024-11-26 17:46:59.912632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:49:59.306 [2024-11-26 17:46:59.912641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:59.306 [2024-11-26 17:46:59.912650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:49:59.306 [2024-11-26 17:46:59.912659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:49:59.306 [2024-11-26 17:46:59.912668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:59.306 [2024-11-26 17:46:59.912676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:49:59.306 [2024-11-26 17:46:59.912684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:49:59.306 [2024-11-26 17:46:59.912692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:49:59.306 [2024-11-26 17:46:59.912701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:49:59.306 [2024-11-26 17:46:59.912709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:49:59.306 [2024-11-26 17:46:59.912718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:49:59.306 [2024-11-26 17:46:59.912727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:49:59.306 [2024-11-26 17:46:59.912735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:49:59.306 [2024-11-26 17:46:59.912744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:49:59.306 [2024-11-26 17:46:59.912752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:49:59.306 [2024-11-26 17:46:59.912760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:49:59.306 [2024-11-26 17:46:59.912768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:59.306 [2024-11-26 17:46:59.912776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:49:59.306 [2024-11-26 17:46:59.912785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:49:59.306 [2024-11-26 17:46:59.912796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:59.306 [2024-11-26 17:46:59.912805] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:49:59.306 [2024-11-26 17:46:59.912816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:49:59.306 [2024-11-26 17:46:59.912830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:49:59.306 [2024-11-26 17:46:59.912839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:49:59.306 [2024-11-26 17:46:59.912849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:49:59.306 [2024-11-26 17:46:59.912858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:49:59.306 [2024-11-26 17:46:59.912867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:49:59.306 
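As a cross-check on the layout dump above: the num_blocks value pulled out with jq earlier (23592960) equals the reported L2P entry count, and at the stated 4-byte address size the mapping table is exactly the 90.00 MiB shown for the l2p region. Assuming FTL's usual 4 KiB logical block (the block size itself is not printed here), those entries address 92160 MiB of user space on the 103424.00 MiB base device; a two-line bash check:

echo $(( 23592960 * 4 / 1024 / 1024 ))      # 90    -> MiB of L2P table (entries x 4-byte address)
echo $(( 23592960 * 4096 / 1024 / 1024 ))   # 92160 -> MiB addressable, assuming 4 KiB blocks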
[2024-11-26 17:46:59.912876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:49:59.306 [2024-11-26 17:46:59.912885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:49:59.306 [2024-11-26 17:46:59.912894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:49:59.306 [2024-11-26 17:46:59.912904] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:49:59.306 [2024-11-26 17:46:59.912933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:49:59.306 [2024-11-26 17:46:59.912945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:49:59.306 [2024-11-26 17:46:59.912955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:49:59.306 [2024-11-26 17:46:59.912966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:49:59.306 [2024-11-26 17:46:59.912976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:49:59.306 [2024-11-26 17:46:59.912986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:49:59.307 [2024-11-26 17:46:59.912996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:49:59.307 [2024-11-26 17:46:59.913007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:49:59.307 [2024-11-26 17:46:59.913017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:49:59.307 [2024-11-26 17:46:59.913027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:49:59.307 [2024-11-26 17:46:59.913038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:49:59.307 [2024-11-26 17:46:59.913048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:49:59.307 [2024-11-26 17:46:59.913058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:49:59.307 [2024-11-26 17:46:59.913068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:49:59.307 [2024-11-26 17:46:59.913078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:49:59.307 [2024-11-26 17:46:59.913088] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:49:59.307 [2024-11-26 17:46:59.913100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:49:59.307 [2024-11-26 17:46:59.913112] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:49:59.307 [2024-11-26 17:46:59.913139] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:49:59.307 [2024-11-26 17:46:59.913150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:49:59.307 [2024-11-26 17:46:59.913162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:49:59.307 [2024-11-26 17:46:59.913173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.307 [2024-11-26 17:46:59.913188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:49:59.307 [2024-11-26 17:46:59.913199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.857 ms 00:49:59.307 [2024-11-26 17:46:59.913208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.307 [2024-11-26 17:46:59.964663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.307 [2024-11-26 17:46:59.964855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:49:59.307 [2024-11-26 17:46:59.964878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.473 ms 00:49:59.307 [2024-11-26 17:46:59.964890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.307 [2024-11-26 17:46:59.965052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.307 [2024-11-26 17:46:59.965065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:49:59.307 [2024-11-26 17:46:59.965077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:49:59.307 [2024-11-26 17:46:59.965088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.566 [2024-11-26 17:47:00.030798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.566 [2024-11-26 17:47:00.030840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:49:59.566 [2024-11-26 17:47:00.030855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.791 ms 00:49:59.566 [2024-11-26 17:47:00.030867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.566 [2024-11-26 17:47:00.030964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.566 [2024-11-26 17:47:00.030978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:49:59.566 [2024-11-26 17:47:00.030990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:49:59.566 [2024-11-26 17:47:00.031001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.566 [2024-11-26 17:47:00.031790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.566 [2024-11-26 17:47:00.031806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:49:59.566 [2024-11-26 17:47:00.031827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.766 ms 00:49:59.566 [2024-11-26 17:47:00.031838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.566 [2024-11-26 17:47:00.031980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.566 [2024-11-26 17:47:00.031994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:49:59.566 [2024-11-26 17:47:00.032007] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:49:59.566 [2024-11-26 17:47:00.032017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.566 [2024-11-26 17:47:00.056219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.566 [2024-11-26 17:47:00.056257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:49:59.566 [2024-11-26 17:47:00.056272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.216 ms 00:49:59.566 [2024-11-26 17:47:00.056284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.566 [2024-11-26 17:47:00.077019] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:49:59.566 [2024-11-26 17:47:00.077057] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:49:59.566 [2024-11-26 17:47:00.077075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.566 [2024-11-26 17:47:00.077086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:49:59.566 [2024-11-26 17:47:00.077098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.687 ms 00:49:59.566 [2024-11-26 17:47:00.077108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.566 [2024-11-26 17:47:00.108367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.566 [2024-11-26 17:47:00.108407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:49:59.566 [2024-11-26 17:47:00.108422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.198 ms 00:49:59.566 [2024-11-26 17:47:00.108434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.566 [2024-11-26 17:47:00.127465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.566 [2024-11-26 17:47:00.127509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:49:59.566 [2024-11-26 17:47:00.127524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.954 ms 00:49:59.566 [2024-11-26 17:47:00.127535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.566 [2024-11-26 17:47:00.146003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.566 [2024-11-26 17:47:00.146038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:49:59.566 [2024-11-26 17:47:00.146052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.418 ms 00:49:59.566 [2024-11-26 17:47:00.146063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.566 [2024-11-26 17:47:00.146992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.567 [2024-11-26 17:47:00.147024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:49:59.567 [2024-11-26 17:47:00.147037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.820 ms 00:49:59.567 [2024-11-26 17:47:00.147048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.567 [2024-11-26 17:47:00.248412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.567 [2024-11-26 17:47:00.248484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:49:59.567 [2024-11-26 17:47:00.248517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 101.494 ms 00:49:59.567 [2024-11-26 17:47:00.248530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.827 [2024-11-26 17:47:00.259546] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:49:59.827 [2024-11-26 17:47:00.285578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.827 [2024-11-26 17:47:00.285628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:49:59.827 [2024-11-26 17:47:00.285647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.002 ms 00:49:59.827 [2024-11-26 17:47:00.285658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.827 [2024-11-26 17:47:00.285823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.827 [2024-11-26 17:47:00.285837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:49:59.827 [2024-11-26 17:47:00.285851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:49:59.827 [2024-11-26 17:47:00.285862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.827 [2024-11-26 17:47:00.285934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.827 [2024-11-26 17:47:00.285947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:49:59.827 [2024-11-26 17:47:00.285958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:49:59.827 [2024-11-26 17:47:00.285969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.827 [2024-11-26 17:47:00.286012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.827 [2024-11-26 17:47:00.286032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:49:59.827 [2024-11-26 17:47:00.286044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:49:59.827 [2024-11-26 17:47:00.286055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.827 [2024-11-26 17:47:00.286099] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:49:59.827 [2024-11-26 17:47:00.286114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.827 [2024-11-26 17:47:00.286124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:49:59.827 [2024-11-26 17:47:00.286135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:49:59.827 [2024-11-26 17:47:00.286145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.827 [2024-11-26 17:47:00.323823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.827 [2024-11-26 17:47:00.323978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:49:59.827 [2024-11-26 17:47:00.324001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.715 ms 00:49:59.827 [2024-11-26 17:47:00.324013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:49:59.827 [2024-11-26 17:47:00.324192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:49:59.827 [2024-11-26 17:47:00.324209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:49:59.827 [2024-11-26 17:47:00.324221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:49:59.827 [2024-11-26 17:47:00.324232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
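Each management step in this log is a four-line quartet emitted by trace_step() in mngt/ftl_mngt.c: an Action (or Rollback) marker at :427, the step name at :428, its duration at :430, and its status at :431. That makes rough startup/shutdown profiling a one-liner against a saved copy of this console output ('build.log' below is a stand-in name, not a file this job actually produced):

grep -oE 'duration: [0-9]+\.[0-9]+ ms' build.log |
  awk '{ total += $2 } END { printf "%d steps, %.3f ms total\n", NR, total }'

Summing the per-step durations this way should roughly account for the totals that finish_msg() prints, such as the 587.291 ms 'FTL shutdown' earlier and the 'FTL startup' total that follows.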
00:49:59.827 [2024-11-26 17:47:00.325611] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:49:59.827 [2024-11-26 17:47:00.329835] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 461.206 ms, result 0 00:49:59.827 [2024-11-26 17:47:00.330705] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:49:59.827 [2024-11-26 17:47:00.349228] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:50:00.765  [2024-11-26T17:47:02.398Z] Copying: 22/256 [MB] (22 MBps) [2024-11-26T17:47:03.777Z] Copying: 44/256 [MB] (22 MBps) [2024-11-26T17:47:04.715Z] Copying: 67/256 [MB] (22 MBps) [2024-11-26T17:47:05.653Z] Copying: 89/256 [MB] (22 MBps) [2024-11-26T17:47:06.592Z] Copying: 111/256 [MB] (21 MBps) [2024-11-26T17:47:07.530Z] Copying: 133/256 [MB] (21 MBps) [2024-11-26T17:47:08.467Z] Copying: 154/256 [MB] (21 MBps) [2024-11-26T17:47:09.404Z] Copying: 176/256 [MB] (21 MBps) [2024-11-26T17:47:10.345Z] Copying: 197/256 [MB] (21 MBps) [2024-11-26T17:47:11.727Z] Copying: 219/256 [MB] (22 MBps) [2024-11-26T17:47:11.987Z] Copying: 241/256 [MB] (22 MBps) [2024-11-26T17:47:11.987Z] Copying: 256/256 [MB] (average 22 MBps)[2024-11-26 17:47:11.984239] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:50:11.556 [2024-11-26 17:47:12.001342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:11.556 [2024-11-26 17:47:12.001387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:50:11.556 [2024-11-26 17:47:12.001404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:50:11.556 [2024-11-26 17:47:12.001421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:11.556 [2024-11-26 17:47:12.001447] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:50:11.556 [2024-11-26 17:47:12.005671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:11.556 [2024-11-26 17:47:12.005702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:50:11.556 [2024-11-26 17:47:12.005715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.215 ms 00:50:11.556 [2024-11-26 17:47:12.005725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:11.556 [2024-11-26 17:47:12.007773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:11.556 [2024-11-26 17:47:12.007816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:50:11.556 [2024-11-26 17:47:12.007830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.021 ms 00:50:11.556 [2024-11-26 17:47:12.007841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:11.556 [2024-11-26 17:47:12.013490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:11.556 [2024-11-26 17:47:12.013542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:50:11.556 [2024-11-26 17:47:12.013555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.638 ms 00:50:11.556 [2024-11-26 17:47:12.013566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:11.556 [2024-11-26 17:47:12.019168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:11.556 
[2024-11-26 17:47:12.019413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:50:11.556 [2024-11-26 17:47:12.019435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.557 ms 00:50:11.556 [2024-11-26 17:47:12.019446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:11.556 [2024-11-26 17:47:12.055048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:11.556 [2024-11-26 17:47:12.055085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:50:11.556 [2024-11-26 17:47:12.055099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.589 ms 00:50:11.556 [2024-11-26 17:47:12.055109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:11.556 [2024-11-26 17:47:12.075925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:11.556 [2024-11-26 17:47:12.075969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:50:11.556 [2024-11-26 17:47:12.075987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.795 ms 00:50:11.556 [2024-11-26 17:47:12.075998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:11.556 [2024-11-26 17:47:12.076163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:11.556 [2024-11-26 17:47:12.076180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:50:11.556 [2024-11-26 17:47:12.076192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:50:11.556 [2024-11-26 17:47:12.076212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:11.556 [2024-11-26 17:47:12.111881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:11.556 [2024-11-26 17:47:12.111916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:50:11.556 [2024-11-26 17:47:12.111930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.708 ms 00:50:11.556 [2024-11-26 17:47:12.111940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:11.556 [2024-11-26 17:47:12.145726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:11.556 [2024-11-26 17:47:12.145912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:50:11.556 [2024-11-26 17:47:12.145933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.786 ms 00:50:11.556 [2024-11-26 17:47:12.145943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:11.556 [2024-11-26 17:47:12.179333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:11.556 [2024-11-26 17:47:12.179368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:50:11.556 [2024-11-26 17:47:12.179388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.390 ms 00:50:11.556 [2024-11-26 17:47:12.179397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:11.556 [2024-11-26 17:47:12.213801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:11.556 [2024-11-26 17:47:12.213836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:50:11.556 [2024-11-26 17:47:12.213849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.365 ms 00:50:11.556 [2024-11-26 17:47:12.213858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:11.556 [2024-11-26 17:47:12.213909] 
00:50:11.556 [2024-11-26 17:47:12.213909] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[log condensed: 100 identical entries from ftl_debug.c: 167:ftl_dev_dump_bands, 2024-11-26 17:47:12.213926 through 17:47:12.215026 — Band 1 through Band 100 each report 0 / 261120 wr_cnt: 0 state: free]
00:50:11.557 [2024-11-26 17:47:12.215043] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:50:11.557 [2024-11-26 17:47:12.215053] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 374db20d-0c07-4115-a3e3-8f48851ecd1a
00:50:11.557 [2024-11-26 17:47:12.215063] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:50:11.557 [2024-11-26 17:47:12.215074] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:50:11.557 [2024-11-26 17:47:12.215082] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:50:11.557 [2024-11-26 17:47:12.215092] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:50:11.557 [2024-11-26 17:47:12.215102] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:50:11.557 [2024-11-26 17:47:12.215113] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:50:11.557 [2024-11-26 17:47:12.215123] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:50:11.557 [2024-11-26 17:47:12.215132] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:50:11.557 [2024-11-26 17:47:12.215141] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:50:11.557 [2024-11-26 17:47:12.215151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:11.557 [2024-11-26 17:47:12.215165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:50:11.558 [2024-11-26 17:47:12.215176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.245 ms
00:50:11.558 [2024-11-26 17:47:12.215185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:11.558 [2024-11-26 17:47:12.233377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:11.558 [2024-11-26 17:47:12.233411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:50:11.558 [2024-11-26 17:47:12.233423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.201 ms
00:50:11.558 [2024-11-26 17:47:12.233432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:11.558 [2024-11-26 17:47:12.234059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:11.558 [2024-11-26 17:47:12.234093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:50:11.558 [2024-11-26 17:47:12.234105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms
00:50:11.558 [2024-11-26 17:47:12.234115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:11.819 [2024-11-26 17:47:12.284918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:50:11.819 [2024-11-26 17:47:12.285115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:50:11.819 [2024-11-26 17:47:12.285135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:50:11.819 [2024-11-26 17:47:12.285146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
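The statistics block just dumped makes the write-amplification line easy to check: with 960 total writes and 0 user writes, every write so far is FTL-internal metadata, so the ratio is unbounded. A small sketch of that arithmetic, assuming (since the C source is not shown in this log) that ftl_debug.c reports WAF as media writes divided by user writes:

    def waf(total_writes: int, user_writes: int) -> float:
        # Write amplification: media writes per user write. Zero user writes
        # makes the ratio unbounded, which the dump prints as "WAF: inf".
        return float("inf") if user_writes == 0 else total_writes / user_writes

    print(waf(960, 0))  # -> inf, matching the dump above (960 metadata-only writes)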
00:50:11.819 [2024-11-26 17:47:12.285245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:50:11.819 [2024-11-26 17:47:12.285259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:50:11.819 [2024-11-26 17:47:12.285270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:50:11.819 [2024-11-26 17:47:12.285279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:11.819 [2024-11-26 17:47:12.285326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:50:11.819 [2024-11-26 17:47:12.285339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:50:11.819 [2024-11-26 17:47:12.285359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:50:11.819 [2024-11-26 17:47:12.285369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:11.819 [2024-11-26 17:47:12.285387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:50:11.819 [2024-11-26 17:47:12.285402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:50:11.819 [2024-11-26 17:47:12.285412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:50:11.819 [2024-11-26 17:47:12.285422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:11.819 [2024-11-26 17:47:12.404341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:50:11.819 [2024-11-26 17:47:12.404596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:50:11.819 [2024-11-26 17:47:12.404630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:50:11.819 [2024-11-26 17:47:12.404641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:11.819 [2024-11-26 17:47:12.498816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:50:11.819 [2024-11-26 17:47:12.498865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:50:11.819 [2024-11-26 17:47:12.498878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:50:11.819 [2024-11-26 17:47:12.498889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:11.819 [2024-11-26 17:47:12.498949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:50:11.819 [2024-11-26 17:47:12.498961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:50:11.819 [2024-11-26 17:47:12.498972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:50:11.819 [2024-11-26 17:47:12.498981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:11.819 [2024-11-26 17:47:12.499010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:50:11.819 [2024-11-26 17:47:12.499021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:50:11.819 [2024-11-26 17:47:12.499037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:50:11.819 [2024-11-26 17:47:12.499046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:11.819 [2024-11-26 17:47:12.499156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:50:11.819 [2024-11-26 17:47:12.499169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:50:11.819 [2024-11-26 17:47:12.499180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:50:11.819 [2024-11-26 17:47:12.499190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:11.819 [2024-11-26 17:47:12.499225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:50:11.819 [2024-11-26 17:47:12.499236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:50:11.819
[2024-11-26 17:47:12.499247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:11.819 [2024-11-26 17:47:12.499260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:11.819 [2024-11-26 17:47:12.499306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:11.819 [2024-11-26 17:47:12.499319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:50:11.819 [2024-11-26 17:47:12.499329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:11.819 [2024-11-26 17:47:12.499339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:11.819 [2024-11-26 17:47:12.499390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:11.819 [2024-11-26 17:47:12.499402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:50:11.819 [2024-11-26 17:47:12.499415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:11.819 [2024-11-26 17:47:12.499425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:11.819 [2024-11-26 17:47:12.499608] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 499.069 ms, result 0 00:50:13.197 00:50:13.197 00:50:13.197 17:47:13 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78631 00:50:13.197 17:47:13 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:50:13.197 17:47:13 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78631 00:50:13.197 17:47:13 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78631 ']' 00:50:13.197 17:47:13 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:13.197 17:47:13 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:50:13.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:13.197 17:47:13 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:13.197 17:47:13 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:50:13.197 17:47:13 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:50:13.197 [2024-11-26 17:47:13.804564] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
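At this point trim.sh has relaunched spdk_tgt with the ftl_init log flag, and waitforlisten blocks until the new process (pid 78631) is serving RPCs on /var/tmp/spdk.sock. The real helper is a bash loop in common/autotest_common.sh; the following is only a rough Python equivalent of the same wait, for illustration:

    import socket
    import time

    def wait_for_rpc(sock_path="/var/tmp/spdk.sock", timeout=30.0):
        """Poll the target's UNIX-domain RPC socket until it accepts connections."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(sock_path)
                return True            # spdk_tgt is up and listening for RPCs
            except OSError:
                time.sleep(0.2)        # not listening yet, retry
            finally:
                s.close()              # this was only a probe connection
        return False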
00:50:13.197 [2024-11-26 17:47:13.804897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78631 ] 00:50:13.459 [2024-11-26 17:47:13.990920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:13.459 [2024-11-26 17:47:14.095545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:50:14.439 17:47:14 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:50:14.439 17:47:14 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:50:14.439 17:47:14 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:50:14.699 [2024-11-26 17:47:15.157048] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:50:14.700 [2024-11-26 17:47:15.157107] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:50:14.700 [2024-11-26 17:47:15.335935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:14.700 [2024-11-26 17:47:15.335984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:50:14.700 [2024-11-26 17:47:15.336004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:50:14.700 [2024-11-26 17:47:15.336015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:14.700 [2024-11-26 17:47:15.339584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:14.700 [2024-11-26 17:47:15.339623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:50:14.700 [2024-11-26 17:47:15.339637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.554 ms 00:50:14.700 [2024-11-26 17:47:15.339647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:14.700 [2024-11-26 17:47:15.339743] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:50:14.700 [2024-11-26 17:47:15.340764] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:50:14.700 [2024-11-26 17:47:15.340803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:14.700 [2024-11-26 17:47:15.340814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:50:14.700 [2024-11-26 17:47:15.340826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:50:14.700 [2024-11-26 17:47:15.340838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:14.700 [2024-11-26 17:47:15.342287] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:50:14.700 [2024-11-26 17:47:15.361343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:14.700 [2024-11-26 17:47:15.361389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:50:14.700 [2024-11-26 17:47:15.361403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.090 ms 00:50:14.700 [2024-11-26 17:47:15.361417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:14.700 [2024-11-26 17:47:15.361552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:14.700 [2024-11-26 17:47:15.361574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:50:14.700 [2024-11-26 17:47:15.361586] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:50:14.700 [2024-11-26 17:47:15.361601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:14.700 [2024-11-26 17:47:15.368376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:14.700 [2024-11-26 17:47:15.368420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:50:14.700 [2024-11-26 17:47:15.368432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.707 ms 00:50:14.700 [2024-11-26 17:47:15.368447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:14.700 [2024-11-26 17:47:15.368591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:14.700 [2024-11-26 17:47:15.368612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:50:14.700 [2024-11-26 17:47:15.368624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:50:14.700 [2024-11-26 17:47:15.368665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:14.700 [2024-11-26 17:47:15.368694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:14.700 [2024-11-26 17:47:15.368720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:50:14.700 [2024-11-26 17:47:15.368732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:50:14.700 [2024-11-26 17:47:15.368746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:14.700 [2024-11-26 17:47:15.368772] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:50:14.700 [2024-11-26 17:47:15.373694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:14.700 [2024-11-26 17:47:15.373727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:50:14.700 [2024-11-26 17:47:15.373745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.929 ms 00:50:14.700 [2024-11-26 17:47:15.373756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:14.700 [2024-11-26 17:47:15.373834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:14.700 [2024-11-26 17:47:15.373847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:50:14.700 [2024-11-26 17:47:15.373870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:50:14.700 [2024-11-26 17:47:15.373880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:14.700 [2024-11-26 17:47:15.373907] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:50:14.700 [2024-11-26 17:47:15.373936] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:50:14.700 [2024-11-26 17:47:15.373987] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:50:14.700 [2024-11-26 17:47:15.374008] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:50:14.700 [2024-11-26 17:47:15.374103] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:50:14.700 [2024-11-26 17:47:15.374118] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:50:14.700 [2024-11-26 17:47:15.374144] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:50:14.700 [2024-11-26 17:47:15.374158] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:50:14.700 [2024-11-26 17:47:15.374176] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:50:14.700 [2024-11-26 17:47:15.374188] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:50:14.700 [2024-11-26 17:47:15.374203] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:50:14.700 [2024-11-26 17:47:15.374214] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:50:14.700 [2024-11-26 17:47:15.374234] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:50:14.700 [2024-11-26 17:47:15.374245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:14.700 [2024-11-26 17:47:15.374260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:50:14.700 [2024-11-26 17:47:15.374270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:50:14.700 [2024-11-26 17:47:15.374291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:14.700 [2024-11-26 17:47:15.374384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:14.700 [2024-11-26 17:47:15.374401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:50:14.700 [2024-11-26 17:47:15.374412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:50:14.700 [2024-11-26 17:47:15.374427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:14.700 [2024-11-26 17:47:15.374560] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:50:14.700 [2024-11-26 17:47:15.374582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:50:14.700 [2024-11-26 17:47:15.374594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:50:14.700 [2024-11-26 17:47:15.374609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:14.700 [2024-11-26 17:47:15.374634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:50:14.700 [2024-11-26 17:47:15.374648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:50:14.700 [2024-11-26 17:47:15.374658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:50:14.700 [2024-11-26 17:47:15.374680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:50:14.700 [2024-11-26 17:47:15.374690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:50:14.700 [2024-11-26 17:47:15.374704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:50:14.700 [2024-11-26 17:47:15.374713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:50:14.700 [2024-11-26 17:47:15.374727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:50:14.700 [2024-11-26 17:47:15.374736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:50:14.700 [2024-11-26 17:47:15.374751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:50:14.700 [2024-11-26 17:47:15.374761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:50:14.700 [2024-11-26 17:47:15.374775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:14.700 
[2024-11-26 17:47:15.374784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:50:14.700 [2024-11-26 17:47:15.374798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:50:14.700 [2024-11-26 17:47:15.374816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:14.700 [2024-11-26 17:47:15.374833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:50:14.700 [2024-11-26 17:47:15.374850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:50:14.700 [2024-11-26 17:47:15.374870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:14.700 [2024-11-26 17:47:15.374880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:50:14.700 [2024-11-26 17:47:15.374894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:50:14.700 [2024-11-26 17:47:15.374903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:14.700 [2024-11-26 17:47:15.374915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:50:14.700 [2024-11-26 17:47:15.374924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:50:14.700 [2024-11-26 17:47:15.374936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:14.700 [2024-11-26 17:47:15.374945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:50:14.700 [2024-11-26 17:47:15.374957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:50:14.700 [2024-11-26 17:47:15.374966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:14.700 [2024-11-26 17:47:15.374977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:50:14.700 [2024-11-26 17:47:15.374986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:50:14.700 [2024-11-26 17:47:15.375000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:50:14.700 [2024-11-26 17:47:15.375009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:50:14.700 [2024-11-26 17:47:15.375020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:50:14.700 [2024-11-26 17:47:15.375031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:50:14.701 [2024-11-26 17:47:15.375043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:50:14.701 [2024-11-26 17:47:15.375052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:50:14.701 [2024-11-26 17:47:15.375066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:14.701 [2024-11-26 17:47:15.375075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:50:14.701 [2024-11-26 17:47:15.375086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:50:14.701 [2024-11-26 17:47:15.375096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:14.701 [2024-11-26 17:47:15.375107] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:50:14.701 [2024-11-26 17:47:15.375120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:50:14.701 [2024-11-26 17:47:15.375132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:50:14.701 [2024-11-26 17:47:15.375141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:14.701 [2024-11-26 17:47:15.375154] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:50:14.701 [2024-11-26 17:47:15.375164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:50:14.701 [2024-11-26 17:47:15.375175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:50:14.701 [2024-11-26 17:47:15.375184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:50:14.701 [2024-11-26 17:47:15.375195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:50:14.701 [2024-11-26 17:47:15.375205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:50:14.701 [2024-11-26 17:47:15.375218] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:50:14.701 [2024-11-26 17:47:15.375230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:50:14.701 [2024-11-26 17:47:15.375247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:50:14.701 [2024-11-26 17:47:15.375257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:50:14.701 [2024-11-26 17:47:15.375272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:50:14.701 [2024-11-26 17:47:15.375283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:50:14.701 [2024-11-26 17:47:15.375296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:50:14.701 [2024-11-26 17:47:15.375306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:50:14.701 [2024-11-26 17:47:15.375319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:50:14.701 [2024-11-26 17:47:15.375329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:50:14.701 [2024-11-26 17:47:15.375341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:50:14.701 [2024-11-26 17:47:15.375352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:50:14.701 [2024-11-26 17:47:15.375364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:50:14.701 [2024-11-26 17:47:15.375374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:50:14.701 [2024-11-26 17:47:15.375396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:50:14.701 [2024-11-26 17:47:15.375410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:50:14.701 [2024-11-26 17:47:15.375423] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:50:14.701 [2024-11-26 
17:47:15.375435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:50:14.701 [2024-11-26 17:47:15.375455] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:50:14.701 [2024-11-26 17:47:15.375466] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:50:14.701 [2024-11-26 17:47:15.375482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:50:14.701 [2024-11-26 17:47:15.375505] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:50:14.701 [2024-11-26 17:47:15.375522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:14.701 [2024-11-26 17:47:15.375533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:50:14.701 [2024-11-26 17:47:15.375549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.056 ms 00:50:14.701 [2024-11-26 17:47:15.375564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:14.961 [2024-11-26 17:47:15.417756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:14.961 [2024-11-26 17:47:15.417965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:50:14.961 [2024-11-26 17:47:15.418099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.189 ms 00:50:14.961 [2024-11-26 17:47:15.418148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:14.961 [2024-11-26 17:47:15.418305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:14.961 [2024-11-26 17:47:15.418549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:50:14.961 [2024-11-26 17:47:15.418600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:50:14.961 [2024-11-26 17:47:15.418635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:14.961 [2024-11-26 17:47:15.467132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:14.961 [2024-11-26 17:47:15.467302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:50:14.961 [2024-11-26 17:47:15.467392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.511 ms 00:50:14.961 [2024-11-26 17:47:15.467432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:14.961 [2024-11-26 17:47:15.467563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:14.961 [2024-11-26 17:47:15.467606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:50:14.961 [2024-11-26 17:47:15.467771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:50:14.961 [2024-11-26 17:47:15.467809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:14.961 [2024-11-26 17:47:15.468269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:14.961 [2024-11-26 17:47:15.468379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:50:14.961 [2024-11-26 17:47:15.468453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:50:14.962 [2024-11-26 17:47:15.468489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:50:14.962 [2024-11-26 17:47:15.468671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:14.962 [2024-11-26 17:47:15.468713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:50:14.962 [2024-11-26 17:47:15.468815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:50:14.962 [2024-11-26 17:47:15.468852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:14.962 [2024-11-26 17:47:15.490594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:14.962 [2024-11-26 17:47:15.490760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:50:14.962 [2024-11-26 17:47:15.490865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.723 ms 00:50:14.962 [2024-11-26 17:47:15.490903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:14.962 [2024-11-26 17:47:15.540138] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:50:14.962 [2024-11-26 17:47:15.540315] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:50:14.962 [2024-11-26 17:47:15.540441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:14.962 [2024-11-26 17:47:15.540481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:50:14.962 [2024-11-26 17:47:15.540547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.477 ms 00:50:14.962 [2024-11-26 17:47:15.540822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:14.962 [2024-11-26 17:47:15.569983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:14.962 [2024-11-26 17:47:15.570116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:50:14.962 [2024-11-26 17:47:15.570239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.085 ms 00:50:14.962 [2024-11-26 17:47:15.570278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:14.962 [2024-11-26 17:47:15.587512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:14.962 [2024-11-26 17:47:15.587653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:50:14.962 [2024-11-26 17:47:15.587735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.124 ms 00:50:14.962 [2024-11-26 17:47:15.587773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:14.962 [2024-11-26 17:47:15.604603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:14.962 [2024-11-26 17:47:15.604746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:50:14.962 [2024-11-26 17:47:15.604853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.712 ms 00:50:14.962 [2024-11-26 17:47:15.604870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:14.962 [2024-11-26 17:47:15.605612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:14.962 [2024-11-26 17:47:15.605638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:50:14.962 [2024-11-26 17:47:15.605655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms 00:50:14.962 [2024-11-26 17:47:15.605666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:15.222 [2024-11-26 
17:47:15.689381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:15.222 [2024-11-26 17:47:15.689440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:50:15.222 [2024-11-26 17:47:15.689461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.817 ms 00:50:15.222 [2024-11-26 17:47:15.689472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:15.222 [2024-11-26 17:47:15.699288] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:50:15.222 [2024-11-26 17:47:15.714835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:15.222 [2024-11-26 17:47:15.714896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:50:15.222 [2024-11-26 17:47:15.714912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.256 ms 00:50:15.222 [2024-11-26 17:47:15.714926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:15.222 [2024-11-26 17:47:15.715019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:15.222 [2024-11-26 17:47:15.715037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:50:15.222 [2024-11-26 17:47:15.715048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:50:15.222 [2024-11-26 17:47:15.715063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:15.222 [2024-11-26 17:47:15.715116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:15.222 [2024-11-26 17:47:15.715133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:50:15.222 [2024-11-26 17:47:15.715144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:50:15.222 [2024-11-26 17:47:15.715162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:15.222 [2024-11-26 17:47:15.715186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:15.222 [2024-11-26 17:47:15.715202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:50:15.223 [2024-11-26 17:47:15.715213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:50:15.223 [2024-11-26 17:47:15.715227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:15.223 [2024-11-26 17:47:15.715268] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:50:15.223 [2024-11-26 17:47:15.715290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:15.223 [2024-11-26 17:47:15.715305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:50:15.223 [2024-11-26 17:47:15.715319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:50:15.223 [2024-11-26 17:47:15.715333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:15.223 [2024-11-26 17:47:15.750024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:15.223 [2024-11-26 17:47:15.750061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:50:15.223 [2024-11-26 17:47:15.750079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.714 ms 00:50:15.223 [2024-11-26 17:47:15.750090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:15.223 [2024-11-26 17:47:15.750204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:15.223 [2024-11-26 17:47:15.750218] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:50:15.223 [2024-11-26 17:47:15.750240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms
00:50:15.223 [2024-11-26 17:47:15.750250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:15.223 [2024-11-26 17:47:15.751234] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:50:15.223 [2024-11-26 17:47:15.755353] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 415.623 ms, result 0
00:50:15.223 [2024-11-26 17:47:15.756606] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:50:15.223 Some configs were skipped because the RPC state that can call them passed over.
00:50:15.223 17:47:15 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:50:15.482 [2024-11-26 17:47:16.003884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:15.482 [2024-11-26 17:47:16.003943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:50:15.482 [2024-11-26 17:47:16.003959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.673 ms
00:50:15.482 [2024-11-26 17:47:16.003975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:15.482 [2024-11-26 17:47:16.004014] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.804 ms, result 0
00:50:15.482 true
00:50:15.482 17:47:16 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:50:15.743 [2024-11-26 17:47:16.223550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:15.743 [2024-11-26 17:47:16.223717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:50:15.743 [2024-11-26 17:47:16.223806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.488 ms
00:50:15.743 [2024-11-26 17:47:16.223847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:15.743 [2024-11-26 17:47:16.223931] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.864 ms, result 0
00:50:15.743 true
00:50:15.743 17:47:16 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78631
00:50:15.743 17:47:16 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78631 ']'
00:50:15.743 17:47:16 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78631
00:50:15.743 17:47:16 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:50:15.743 17:47:16 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:50:15.743 17:47:16 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78631
00:50:15.743 17:47:16 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:50:15.743 17:47:16 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:50:15.743 17:47:16 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78631'
killing process with pid 78631
00:50:15.743 17:47:16 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78631
00:50:15.743 17:47:16 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78631
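The two bdev_ftl_unmap RPCs above trim 1024 blocks at each end of the device: the first at LBA 0, the second at LBA 23591936, which is exactly the 23592960 L2P entries reported in the startup layout dump minus 1024. The same dump figures also let you check the L2P table size. A quick sanity-arithmetic sketch — the constants are copied from the log, nothing here is an SPDK API:

    # Figures from the startup layout dump above.
    L2P_ENTRIES = 23592960      # "L2P entries": one entry per logical block
    L2P_ADDR_SIZE = 4           # "L2P address size: 4" (bytes per entry)
    NUM_BLOCKS = 1024           # --num_blocks used by both unmap RPCs

    last_trim_lba = L2P_ENTRIES - NUM_BLOCKS
    assert last_trim_lba == 23591936        # --lba of the second RPC: the last 1024 blocks

    l2p_mib = L2P_ENTRIES * L2P_ADDR_SIZE / (1024 * 1024)
    assert l2p_mib == 90.0                  # matches "Region l2p ... blocks: 90.00 MiB"
    print(last_trim_lba, l2p_mib)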
00:50:16.682 [2024-11-26 17:47:17.349027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.682 [2024-11-26 17:47:17.349090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:50:16.682 [2024-11-26 17:47:17.349106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:50:16.682 [2024-11-26 17:47:17.349118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.682 [2024-11-26 17:47:17.349144] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:50:16.682 [2024-11-26 17:47:17.352935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.682 [2024-11-26 17:47:17.352974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:50:16.682 [2024-11-26 17:47:17.352991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.775 ms
00:50:16.682 [2024-11-26 17:47:17.353002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.682 [2024-11-26 17:47:17.353239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.682 [2024-11-26 17:47:17.353253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:50:16.682 [2024-11-26 17:47:17.353265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms
00:50:16.682 [2024-11-26 17:47:17.353275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.682 [2024-11-26 17:47:17.356531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.682 [2024-11-26 17:47:17.356570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:50:16.682 [2024-11-26 17:47:17.356584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.224 ms
00:50:16.682 [2024-11-26 17:47:17.356595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.682 [2024-11-26 17:47:17.361877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.682 [2024-11-26 17:47:17.361911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:50:16.682 [2024-11-26 17:47:17.361924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.247 ms
00:50:16.682 [2024-11-26 17:47:17.361934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.943 [2024-11-26 17:47:17.376051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.943 [2024-11-26 17:47:17.376299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:50:16.943 [2024-11-26 17:47:17.376326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.081 ms
00:50:16.943 [2024-11-26 17:47:17.376337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.943 [2024-11-26 17:47:17.387069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.943 [2024-11-26 17:47:17.387242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:50:16.943 [2024-11-26 17:47:17.387268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.638 ms
00:50:16.943 [2024-11-26 17:47:17.387279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:50:16.943 [2024-11-26 17:47:17.387415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:50:16.943 [2024-11-26 17:47:17.387430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:50:16.943 [2024-11-26 17:47:17.387442] mngt/ftl_mngt.c:
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:50:16.943 [2024-11-26 17:47:17.387452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:16.943 [2024-11-26 17:47:17.402529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:16.943 [2024-11-26 17:47:17.402563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:50:16.943 [2024-11-26 17:47:17.402584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.077 ms 00:50:16.943 [2024-11-26 17:47:17.402594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:16.943 [2024-11-26 17:47:17.416584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:16.943 [2024-11-26 17:47:17.416617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:50:16.943 [2024-11-26 17:47:17.416640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.957 ms 00:50:16.943 [2024-11-26 17:47:17.416649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:16.943 [2024-11-26 17:47:17.430590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:16.943 [2024-11-26 17:47:17.430622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:50:16.943 [2024-11-26 17:47:17.430641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.904 ms 00:50:16.943 [2024-11-26 17:47:17.430650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:16.943 [2024-11-26 17:47:17.444286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:16.943 [2024-11-26 17:47:17.444319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:50:16.943 [2024-11-26 17:47:17.444336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.577 ms 00:50:16.943 [2024-11-26 17:47:17.444346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:16.943 [2024-11-26 17:47:17.444395] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:50:16.943 [2024-11-26 17:47:17.444412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 
17:47:17.444568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:50:16.943 [2024-11-26 17:47:17.444867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:50:16.943 [2024-11-26 17:47:17.444977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.444991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:50:16.944 [2024-11-26 17:47:17.445738] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:50:16.944 [2024-11-26 17:47:17.445757] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 374db20d-0c07-4115-a3e3-8f48851ecd1a 00:50:16.944 [2024-11-26 17:47:17.445774] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:50:16.944 [2024-11-26 17:47:17.445787] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:50:16.944 [2024-11-26 17:47:17.445797] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:50:16.944 [2024-11-26 17:47:17.445811] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:50:16.944 [2024-11-26 17:47:17.445821] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:50:16.944 [2024-11-26 17:47:17.445837] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:50:16.944 [2024-11-26 17:47:17.445847] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:50:16.944 [2024-11-26 17:47:17.445860] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:50:16.944 [2024-11-26 17:47:17.445869] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:50:16.944 [2024-11-26 17:47:17.445883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:50:16.944 [2024-11-26 17:47:17.445893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:50:16.944 [2024-11-26 17:47:17.445908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.493 ms 00:50:16.944 [2024-11-26 17:47:17.445923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:16.944 [2024-11-26 17:47:17.464432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:16.944 [2024-11-26 17:47:17.464466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:50:16.944 [2024-11-26 17:47:17.464488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.494 ms 00:50:16.944 [2024-11-26 17:47:17.464514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:16.944 [2024-11-26 17:47:17.465027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:16.944 [2024-11-26 17:47:17.465055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:50:16.944 [2024-11-26 17:47:17.465077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:50:16.944 [2024-11-26 17:47:17.465087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:16.944 [2024-11-26 17:47:17.531786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:16.944 [2024-11-26 17:47:17.531820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:50:16.944 [2024-11-26 17:47:17.531835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:16.944 [2024-11-26 17:47:17.531846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:16.944 [2024-11-26 17:47:17.531923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:16.944 [2024-11-26 17:47:17.531935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:50:16.944 [2024-11-26 17:47:17.531951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:16.944 [2024-11-26 17:47:17.531960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:16.944 [2024-11-26 17:47:17.532009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:16.944 [2024-11-26 17:47:17.532022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:50:16.944 [2024-11-26 17:47:17.532037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:16.944 [2024-11-26 17:47:17.532047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:16.944 [2024-11-26 17:47:17.532067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:16.944 [2024-11-26 17:47:17.532077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:50:16.944 [2024-11-26 17:47:17.532089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:16.944 [2024-11-26 17:47:17.532101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:17.204 [2024-11-26 17:47:17.648128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:17.204 [2024-11-26 17:47:17.648363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:50:17.204 [2024-11-26 17:47:17.648389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:17.204 [2024-11-26 17:47:17.648400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:17.204 [2024-11-26 
17:47:17.743899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:17.204 [2024-11-26 17:47:17.743948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:50:17.204 [2024-11-26 17:47:17.743969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:17.204 [2024-11-26 17:47:17.743980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:17.204 [2024-11-26 17:47:17.744054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:17.204 [2024-11-26 17:47:17.744066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:50:17.204 [2024-11-26 17:47:17.744082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:17.204 [2024-11-26 17:47:17.744091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:17.204 [2024-11-26 17:47:17.744120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:17.204 [2024-11-26 17:47:17.744131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:50:17.204 [2024-11-26 17:47:17.744143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:17.204 [2024-11-26 17:47:17.744152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:17.204 [2024-11-26 17:47:17.744268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:17.204 [2024-11-26 17:47:17.744281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:50:17.204 [2024-11-26 17:47:17.744294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:17.204 [2024-11-26 17:47:17.744304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:17.204 [2024-11-26 17:47:17.744343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:17.204 [2024-11-26 17:47:17.744354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:50:17.204 [2024-11-26 17:47:17.744367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:17.204 [2024-11-26 17:47:17.744377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:17.204 [2024-11-26 17:47:17.744419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:17.204 [2024-11-26 17:47:17.744430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:50:17.204 [2024-11-26 17:47:17.744445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:17.204 [2024-11-26 17:47:17.744454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:17.204 [2024-11-26 17:47:17.744523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:17.204 [2024-11-26 17:47:17.744536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:50:17.204 [2024-11-26 17:47:17.744549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:17.204 [2024-11-26 17:47:17.744559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:17.204 [2024-11-26 17:47:17.744693] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 396.290 ms, result 0 00:50:18.143 17:47:18 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:50:18.143 17:47:18 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:50:18.143 [2024-11-26 17:47:18.819450] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:50:18.143 [2024-11-26 17:47:18.819614] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78695 ] 00:50:18.403 [2024-11-26 17:47:19.005543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:18.675 [2024-11-26 17:47:19.110046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:50:18.939 [2024-11-26 17:47:19.477556] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:50:18.940 [2024-11-26 17:47:19.477620] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:50:19.200 [2024-11-26 17:47:19.638258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.200 [2024-11-26 17:47:19.638308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:50:19.200 [2024-11-26 17:47:19.638324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:50:19.200 [2024-11-26 17:47:19.638333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.200 [2024-11-26 17:47:19.641285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.200 [2024-11-26 17:47:19.641325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:50:19.200 [2024-11-26 17:47:19.641337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.937 ms 00:50:19.200 [2024-11-26 17:47:19.641347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.200 [2024-11-26 17:47:19.641435] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:50:19.200 [2024-11-26 17:47:19.642442] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:50:19.200 [2024-11-26 17:47:19.642469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.200 [2024-11-26 17:47:19.642479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:50:19.200 [2024-11-26 17:47:19.642489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.042 ms 00:50:19.200 [2024-11-26 17:47:19.642512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.200 [2024-11-26 17:47:19.644108] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:50:19.200 [2024-11-26 17:47:19.663269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.200 [2024-11-26 17:47:19.663306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:50:19.200 [2024-11-26 17:47:19.663320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.192 ms 00:50:19.200 [2024-11-26 17:47:19.663330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.200 [2024-11-26 17:47:19.663432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.200 [2024-11-26 17:47:19.663446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:50:19.200 [2024-11-26 17:47:19.663457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.021 ms 00:50:19.200 [2024-11-26 17:47:19.663467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.200 [2024-11-26 17:47:19.669943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.200 [2024-11-26 17:47:19.670159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:50:19.200 [2024-11-26 17:47:19.670181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.428 ms 00:50:19.200 [2024-11-26 17:47:19.670191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.200 [2024-11-26 17:47:19.670302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.200 [2024-11-26 17:47:19.670317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:50:19.200 [2024-11-26 17:47:19.670328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:50:19.200 [2024-11-26 17:47:19.670338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.200 [2024-11-26 17:47:19.670369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.200 [2024-11-26 17:47:19.670380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:50:19.200 [2024-11-26 17:47:19.670392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:50:19.200 [2024-11-26 17:47:19.670402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.200 [2024-11-26 17:47:19.670424] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:50:19.200 [2024-11-26 17:47:19.674931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.200 [2024-11-26 17:47:19.674961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:50:19.200 [2024-11-26 17:47:19.674972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.520 ms 00:50:19.200 [2024-11-26 17:47:19.674982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.200 [2024-11-26 17:47:19.675043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.200 [2024-11-26 17:47:19.675055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:50:19.200 [2024-11-26 17:47:19.675065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:50:19.200 [2024-11-26 17:47:19.675074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.200 [2024-11-26 17:47:19.675097] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:50:19.200 [2024-11-26 17:47:19.675117] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:50:19.200 [2024-11-26 17:47:19.675151] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:50:19.200 [2024-11-26 17:47:19.675168] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:50:19.200 [2024-11-26 17:47:19.675250] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:50:19.200 [2024-11-26 17:47:19.675264] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:50:19.200 [2024-11-26 17:47:19.675277] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:50:19.201 [2024-11-26 17:47:19.675292] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:50:19.201 [2024-11-26 17:47:19.675304] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:50:19.201 [2024-11-26 17:47:19.675315] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:50:19.201 [2024-11-26 17:47:19.675325] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:50:19.201 [2024-11-26 17:47:19.675334] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:50:19.201 [2024-11-26 17:47:19.675343] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:50:19.201 [2024-11-26 17:47:19.675353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.201 [2024-11-26 17:47:19.675364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:50:19.201 [2024-11-26 17:47:19.675374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:50:19.201 [2024-11-26 17:47:19.675393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.201 [2024-11-26 17:47:19.675463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.201 [2024-11-26 17:47:19.675476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:50:19.201 [2024-11-26 17:47:19.675487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:50:19.201 [2024-11-26 17:47:19.675512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.201 [2024-11-26 17:47:19.675595] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:50:19.201 [2024-11-26 17:47:19.675608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:50:19.201 [2024-11-26 17:47:19.675618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:50:19.201 [2024-11-26 17:47:19.675628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:19.201 [2024-11-26 17:47:19.675638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:50:19.201 [2024-11-26 17:47:19.675647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:50:19.201 [2024-11-26 17:47:19.675657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:50:19.201 [2024-11-26 17:47:19.675667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:50:19.201 [2024-11-26 17:47:19.675676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:50:19.201 [2024-11-26 17:47:19.675685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:50:19.201 [2024-11-26 17:47:19.675694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:50:19.201 [2024-11-26 17:47:19.675713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:50:19.201 [2024-11-26 17:47:19.675722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:50:19.201 [2024-11-26 17:47:19.675730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:50:19.201 [2024-11-26 17:47:19.675739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:50:19.201 [2024-11-26 17:47:19.675748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:19.201 [2024-11-26 17:47:19.675757] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:50:19.201 [2024-11-26 17:47:19.675766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:50:19.201 [2024-11-26 17:47:19.675775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:19.201 [2024-11-26 17:47:19.675784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:50:19.201 [2024-11-26 17:47:19.675793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:50:19.201 [2024-11-26 17:47:19.675801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:19.201 [2024-11-26 17:47:19.675810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:50:19.201 [2024-11-26 17:47:19.675819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:50:19.201 [2024-11-26 17:47:19.675827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:19.201 [2024-11-26 17:47:19.675835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:50:19.201 [2024-11-26 17:47:19.675843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:50:19.201 [2024-11-26 17:47:19.675851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:19.201 [2024-11-26 17:47:19.675860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:50:19.201 [2024-11-26 17:47:19.675868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:50:19.201 [2024-11-26 17:47:19.675876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:19.201 [2024-11-26 17:47:19.675884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:50:19.201 [2024-11-26 17:47:19.675892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:50:19.201 [2024-11-26 17:47:19.675900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:50:19.201 [2024-11-26 17:47:19.675909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:50:19.201 [2024-11-26 17:47:19.675917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:50:19.201 [2024-11-26 17:47:19.675925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:50:19.201 [2024-11-26 17:47:19.675934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:50:19.201 [2024-11-26 17:47:19.675943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:50:19.201 [2024-11-26 17:47:19.675951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:19.201 [2024-11-26 17:47:19.675959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:50:19.201 [2024-11-26 17:47:19.675968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:50:19.201 [2024-11-26 17:47:19.675976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:19.201 [2024-11-26 17:47:19.675984] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:50:19.201 [2024-11-26 17:47:19.675993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:50:19.201 [2024-11-26 17:47:19.676005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:50:19.201 [2024-11-26 17:47:19.676014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:19.201 [2024-11-26 17:47:19.676023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:50:19.201 
[2024-11-26 17:47:19.676033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:50:19.201 [2024-11-26 17:47:19.676041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:50:19.201 [2024-11-26 17:47:19.676050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:50:19.201 [2024-11-26 17:47:19.676059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:50:19.201 [2024-11-26 17:47:19.676067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:50:19.201 [2024-11-26 17:47:19.676077] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:50:19.201 [2024-11-26 17:47:19.676089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:50:19.201 [2024-11-26 17:47:19.676101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:50:19.201 [2024-11-26 17:47:19.676111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:50:19.201 [2024-11-26 17:47:19.676121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:50:19.201 [2024-11-26 17:47:19.676131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:50:19.201 [2024-11-26 17:47:19.676141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:50:19.201 [2024-11-26 17:47:19.676150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:50:19.201 [2024-11-26 17:47:19.676159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:50:19.201 [2024-11-26 17:47:19.676168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:50:19.201 [2024-11-26 17:47:19.676178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:50:19.201 [2024-11-26 17:47:19.676187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:50:19.201 [2024-11-26 17:47:19.676196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:50:19.201 [2024-11-26 17:47:19.676206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:50:19.201 [2024-11-26 17:47:19.676215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:50:19.201 [2024-11-26 17:47:19.676225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:50:19.201 [2024-11-26 17:47:19.676234] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:50:19.201 [2024-11-26 17:47:19.676244] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:50:19.201 [2024-11-26 17:47:19.676254] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:50:19.201 [2024-11-26 17:47:19.676264] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:50:19.201 [2024-11-26 17:47:19.676274] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:50:19.201 [2024-11-26 17:47:19.676284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:50:19.201 [2024-11-26 17:47:19.676294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.201 [2024-11-26 17:47:19.676307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:50:19.201 [2024-11-26 17:47:19.676317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.750 ms 00:50:19.201 [2024-11-26 17:47:19.676334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.201 [2024-11-26 17:47:19.714419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.201 [2024-11-26 17:47:19.714461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:50:19.201 [2024-11-26 17:47:19.714475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.095 ms 00:50:19.201 [2024-11-26 17:47:19.714486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.201 [2024-11-26 17:47:19.714615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.201 [2024-11-26 17:47:19.714630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:50:19.202 [2024-11-26 17:47:19.714640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:50:19.202 [2024-11-26 17:47:19.714650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.202 [2024-11-26 17:47:19.789067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.202 [2024-11-26 17:47:19.789105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:50:19.202 [2024-11-26 17:47:19.789123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.515 ms 00:50:19.202 [2024-11-26 17:47:19.789133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.202 [2024-11-26 17:47:19.789225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.202 [2024-11-26 17:47:19.789238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:50:19.202 [2024-11-26 17:47:19.789249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:50:19.202 [2024-11-26 17:47:19.789259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.202 [2024-11-26 17:47:19.789710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.202 [2024-11-26 17:47:19.789725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:50:19.202 [2024-11-26 17:47:19.789744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:50:19.202 [2024-11-26 17:47:19.789753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.202 [2024-11-26 
17:47:19.789865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.202 [2024-11-26 17:47:19.789879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:50:19.202 [2024-11-26 17:47:19.789889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:50:19.202 [2024-11-26 17:47:19.789899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.202 [2024-11-26 17:47:19.808411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.202 [2024-11-26 17:47:19.808445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:50:19.202 [2024-11-26 17:47:19.808458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.520 ms 00:50:19.202 [2024-11-26 17:47:19.808469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.202 [2024-11-26 17:47:19.826036] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:50:19.202 [2024-11-26 17:47:19.826074] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:50:19.202 [2024-11-26 17:47:19.826088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.202 [2024-11-26 17:47:19.826099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:50:19.202 [2024-11-26 17:47:19.826110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.539 ms 00:50:19.202 [2024-11-26 17:47:19.826120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.202 [2024-11-26 17:47:19.853949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.202 [2024-11-26 17:47:19.853988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:50:19.202 [2024-11-26 17:47:19.854002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.797 ms 00:50:19.202 [2024-11-26 17:47:19.854012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.202 [2024-11-26 17:47:19.870750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.202 [2024-11-26 17:47:19.870785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:50:19.202 [2024-11-26 17:47:19.870798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.687 ms 00:50:19.202 [2024-11-26 17:47:19.870807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.202 [2024-11-26 17:47:19.887601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.202 [2024-11-26 17:47:19.887810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:50:19.202 [2024-11-26 17:47:19.887829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.750 ms 00:50:19.202 [2024-11-26 17:47:19.887838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.202 [2024-11-26 17:47:19.888582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.202 [2024-11-26 17:47:19.888609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:50:19.202 [2024-11-26 17:47:19.888621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.633 ms 00:50:19.202 [2024-11-26 17:47:19.888632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.462 [2024-11-26 17:47:19.970974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:50:19.462 [2024-11-26 17:47:19.971032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:50:19.462 [2024-11-26 17:47:19.971047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.447 ms 00:50:19.462 [2024-11-26 17:47:19.971058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.462 [2024-11-26 17:47:19.981066] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:50:19.462 [2024-11-26 17:47:19.996211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.462 [2024-11-26 17:47:19.996251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:50:19.462 [2024-11-26 17:47:19.996265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.110 ms 00:50:19.462 [2024-11-26 17:47:19.996281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.462 [2024-11-26 17:47:19.996365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.462 [2024-11-26 17:47:19.996378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:50:19.462 [2024-11-26 17:47:19.996390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:50:19.462 [2024-11-26 17:47:19.996400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.462 [2024-11-26 17:47:19.996450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.462 [2024-11-26 17:47:19.996462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:50:19.462 [2024-11-26 17:47:19.996472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:50:19.462 [2024-11-26 17:47:19.996487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.462 [2024-11-26 17:47:19.996543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.462 [2024-11-26 17:47:19.996557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:50:19.462 [2024-11-26 17:47:19.996567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:50:19.462 [2024-11-26 17:47:19.996577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.462 [2024-11-26 17:47:19.996610] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:50:19.462 [2024-11-26 17:47:19.996621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.462 [2024-11-26 17:47:19.996631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:50:19.462 [2024-11-26 17:47:19.996643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:50:19.462 [2024-11-26 17:47:19.996653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.462 [2024-11-26 17:47:20.032599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.462 [2024-11-26 17:47:20.032825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:50:19.462 [2024-11-26 17:47:20.032847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.979 ms 00:50:19.462 [2024-11-26 17:47:20.032860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.462 [2024-11-26 17:47:20.032974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:19.462 [2024-11-26 17:47:20.032989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:50:19.462 [2024-11-26 17:47:20.033000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:50:19.462 [2024-11-26 17:47:20.033010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:19.462 [2024-11-26 17:47:20.033940] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:50:19.462 [2024-11-26 17:47:20.038497] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 396.030 ms, result 0 00:50:19.462 [2024-11-26 17:47:20.039347] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:50:19.462 [2024-11-26 17:47:20.057413] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:50:20.400  [2024-11-26T17:47:22.475Z] Copying: 26/256 [MB] (26 MBps) [2024-11-26T17:47:23.412Z] Copying: 50/256 [MB] (23 MBps) [2024-11-26T17:47:24.350Z] Copying: 75/256 [MB] (25 MBps) [2024-11-26T17:47:25.287Z] Copying: 100/256 [MB] (25 MBps) [2024-11-26T17:47:26.222Z] Copying: 125/256 [MB] (25 MBps) [2024-11-26T17:47:27.160Z] Copying: 151/256 [MB] (25 MBps) [2024-11-26T17:47:28.099Z] Copying: 177/256 [MB] (26 MBps) [2024-11-26T17:47:29.479Z] Copying: 202/256 [MB] (25 MBps) [2024-11-26T17:47:30.047Z] Copying: 228/256 [MB] (25 MBps) [2024-11-26T17:47:30.306Z] Copying: 254/256 [MB] (26 MBps) [2024-11-26T17:47:30.306Z] Copying: 256/256 [MB] (average 25 MBps)[2024-11-26 17:47:30.100675] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:50:29.612 [2024-11-26 17:47:30.114489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:29.612 [2024-11-26 17:47:30.114536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:50:29.612 [2024-11-26 17:47:30.114557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:50:29.612 [2024-11-26 17:47:30.114567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:29.612 [2024-11-26 17:47:30.114600] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:50:29.612 [2024-11-26 17:47:30.118359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:29.612 [2024-11-26 17:47:30.118558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:50:29.612 [2024-11-26 17:47:30.118578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.749 ms 00:50:29.612 [2024-11-26 17:47:30.118588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:29.612 [2024-11-26 17:47:30.118800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:29.612 [2024-11-26 17:47:30.118813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:50:29.612 [2024-11-26 17:47:30.118823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:50:29.612 [2024-11-26 17:47:30.118833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:29.612 [2024-11-26 17:47:30.121491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:29.612 [2024-11-26 17:47:30.121521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:50:29.612 [2024-11-26 17:47:30.121532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.642 ms 00:50:29.613 [2024-11-26 17:47:30.121541] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:29.613 [2024-11-26 17:47:30.126691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:29.613 [2024-11-26 17:47:30.126721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:50:29.613 [2024-11-26 17:47:30.126731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.141 ms 00:50:29.613 [2024-11-26 17:47:30.126741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:29.613 [2024-11-26 17:47:30.160687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:29.613 [2024-11-26 17:47:30.160845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:50:29.613 [2024-11-26 17:47:30.160864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.936 ms 00:50:29.613 [2024-11-26 17:47:30.160874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:29.613 [2024-11-26 17:47:30.181832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:29.613 [2024-11-26 17:47:30.181868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:50:29.613 [2024-11-26 17:47:30.181892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.913 ms 00:50:29.613 [2024-11-26 17:47:30.181902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:29.613 [2024-11-26 17:47:30.182017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:29.613 [2024-11-26 17:47:30.182031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:50:29.613 [2024-11-26 17:47:30.182056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:50:29.613 [2024-11-26 17:47:30.182066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:29.613 [2024-11-26 17:47:30.216590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:29.613 [2024-11-26 17:47:30.216625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:50:29.613 [2024-11-26 17:47:30.216637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.563 ms 00:50:29.613 [2024-11-26 17:47:30.216646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:29.613 [2024-11-26 17:47:30.249996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:29.613 [2024-11-26 17:47:30.250030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:50:29.613 [2024-11-26 17:47:30.250042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.341 ms 00:50:29.613 [2024-11-26 17:47:30.250053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:29.613 [2024-11-26 17:47:30.282321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:29.613 [2024-11-26 17:47:30.282355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:50:29.613 [2024-11-26 17:47:30.282368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.271 ms 00:50:29.613 [2024-11-26 17:47:30.282377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:29.873 [2024-11-26 17:47:30.314791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:29.873 [2024-11-26 17:47:30.314825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:50:29.873 [2024-11-26 17:47:30.314836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 32.390 ms 00:50:29.873 [2024-11-26 17:47:30.314845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:29.873 [2024-11-26 17:47:30.314896] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:50:29.873 [2024-11-26 17:47:30.314911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.314923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.314933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.314943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.314954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.314964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.314974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.314984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.314995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 
[2024-11-26 17:47:30.315139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:50:29.873 [2024-11-26 17:47:30.315299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:50:29.874 [2024-11-26 17:47:30.315410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:50:29.874 [2024-11-26 17:47:30.315987] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:50:29.874 [2024-11-26 17:47:30.315997] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 374db20d-0c07-4115-a3e3-8f48851ecd1a 00:50:29.874 [2024-11-26 17:47:30.316007] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:50:29.874 [2024-11-26 17:47:30.316015] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:50:29.874 [2024-11-26 17:47:30.316024] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:50:29.874 [2024-11-26 17:47:30.316033] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:50:29.874 [2024-11-26 17:47:30.316042] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:50:29.874 [2024-11-26 17:47:30.316051] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:50:29.874 [2024-11-26 17:47:30.316067] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:50:29.874 [2024-11-26 17:47:30.316076] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:50:29.874 [2024-11-26 17:47:30.316084] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:50:29.874 [2024-11-26 17:47:30.316094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:29.874 [2024-11-26 17:47:30.316103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:50:29.874 [2024-11-26 17:47:30.316113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.201 ms 00:50:29.874 [2024-11-26 17:47:30.316122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:29.874 [2024-11-26 17:47:30.334162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:29.874 [2024-11-26 17:47:30.334195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:50:29.874 [2024-11-26 17:47:30.334206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.051 ms 00:50:29.874 [2024-11-26 17:47:30.334216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:29.874 [2024-11-26 17:47:30.334742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:29.874 [2024-11-26 17:47:30.334762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:50:29.874 [2024-11-26 17:47:30.334773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.465 ms 00:50:29.874 [2024-11-26 17:47:30.334783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:29.874 [2024-11-26 17:47:30.385651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:29.874 [2024-11-26 17:47:30.385815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:50:29.874 [2024-11-26 17:47:30.385834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:29.874 [2024-11-26 17:47:30.385855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:29.874 [2024-11-26 17:47:30.385925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:29.874 [2024-11-26 
17:47:30.385937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:50:29.874 [2024-11-26 17:47:30.385947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:29.874 [2024-11-26 17:47:30.385956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:29.874 [2024-11-26 17:47:30.386001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:29.874 [2024-11-26 17:47:30.386012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:50:29.874 [2024-11-26 17:47:30.386023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:29.874 [2024-11-26 17:47:30.386032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:29.874 [2024-11-26 17:47:30.386057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:29.874 [2024-11-26 17:47:30.386068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:50:29.875 [2024-11-26 17:47:30.386077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:29.875 [2024-11-26 17:47:30.386087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:29.875 [2024-11-26 17:47:30.499811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:29.875 [2024-11-26 17:47:30.500013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:50:29.875 [2024-11-26 17:47:30.500034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:29.875 [2024-11-26 17:47:30.500045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.134 [2024-11-26 17:47:30.595632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:30.134 [2024-11-26 17:47:30.595674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:50:30.134 [2024-11-26 17:47:30.595704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:30.134 [2024-11-26 17:47:30.595715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.134 [2024-11-26 17:47:30.595775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:30.134 [2024-11-26 17:47:30.595786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:50:30.134 [2024-11-26 17:47:30.595798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:30.134 [2024-11-26 17:47:30.595809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.134 [2024-11-26 17:47:30.595838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:30.134 [2024-11-26 17:47:30.595856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:50:30.134 [2024-11-26 17:47:30.595866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:30.134 [2024-11-26 17:47:30.595876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.134 [2024-11-26 17:47:30.595996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:30.134 [2024-11-26 17:47:30.596009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:50:30.134 [2024-11-26 17:47:30.596019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:30.134 [2024-11-26 17:47:30.596030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.134 [2024-11-26 17:47:30.596067] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:30.134 [2024-11-26 17:47:30.596079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:50:30.134 [2024-11-26 17:47:30.596094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:30.134 [2024-11-26 17:47:30.596103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.134 [2024-11-26 17:47:30.596141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:30.134 [2024-11-26 17:47:30.596153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:50:30.134 [2024-11-26 17:47:30.596163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:30.134 [2024-11-26 17:47:30.596173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.134 [2024-11-26 17:47:30.596215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:30.134 [2024-11-26 17:47:30.596232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:50:30.134 [2024-11-26 17:47:30.596242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:30.134 [2024-11-26 17:47:30.596252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:30.134 [2024-11-26 17:47:30.596384] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 482.680 ms, result 0 00:50:31.073 00:50:31.073 00:50:31.073 17:47:31 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:50:31.073 17:47:31 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:50:31.642 17:47:32 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:50:31.642 [2024-11-26 17:47:32.117699] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:50:31.642 [2024-11-26 17:47:32.117811] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78837 ] 00:50:31.642 [2024-11-26 17:47:32.293661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:31.901 [2024-11-26 17:47:32.399269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:50:32.160 [2024-11-26 17:47:32.754007] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:50:32.160 [2024-11-26 17:47:32.754075] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:50:32.421 [2024-11-26 17:47:32.913758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.421 [2024-11-26 17:47:32.913807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:50:32.421 [2024-11-26 17:47:32.913823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:50:32.421 [2024-11-26 17:47:32.913833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.421 [2024-11-26 17:47:32.916726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.421 [2024-11-26 17:47:32.916770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:50:32.421 [2024-11-26 17:47:32.916783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.878 ms 00:50:32.421 [2024-11-26 17:47:32.916793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.421 [2024-11-26 17:47:32.916882] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:50:32.421 [2024-11-26 17:47:32.917844] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:50:32.421 [2024-11-26 17:47:32.917963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.421 [2024-11-26 17:47:32.917976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:50:32.421 [2024-11-26 17:47:32.917988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.090 ms 00:50:32.421 [2024-11-26 17:47:32.917998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.421 [2024-11-26 17:47:32.919462] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:50:32.421 [2024-11-26 17:47:32.937805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.421 [2024-11-26 17:47:32.937842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:50:32.421 [2024-11-26 17:47:32.937856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.374 ms 00:50:32.421 [2024-11-26 17:47:32.937866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.421 [2024-11-26 17:47:32.937958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.421 [2024-11-26 17:47:32.937972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:50:32.421 [2024-11-26 17:47:32.937984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:50:32.421 [2024-11-26 17:47:32.937993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.421 [2024-11-26 17:47:32.944466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:50:32.421 [2024-11-26 17:47:32.944717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:50:32.421 [2024-11-26 17:47:32.944738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.444 ms 00:50:32.421 [2024-11-26 17:47:32.944749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.421 [2024-11-26 17:47:32.944853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.421 [2024-11-26 17:47:32.944868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:50:32.421 [2024-11-26 17:47:32.944879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:50:32.422 [2024-11-26 17:47:32.944889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.422 [2024-11-26 17:47:32.944919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.422 [2024-11-26 17:47:32.944930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:50:32.422 [2024-11-26 17:47:32.944940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:50:32.422 [2024-11-26 17:47:32.944950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.422 [2024-11-26 17:47:32.944973] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:50:32.422 [2024-11-26 17:47:32.949422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.422 [2024-11-26 17:47:32.949454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:50:32.422 [2024-11-26 17:47:32.949466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.462 ms 00:50:32.422 [2024-11-26 17:47:32.949476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.422 [2024-11-26 17:47:32.949545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.422 [2024-11-26 17:47:32.949559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:50:32.422 [2024-11-26 17:47:32.949569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:50:32.422 [2024-11-26 17:47:32.949579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.422 [2024-11-26 17:47:32.949604] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:50:32.422 [2024-11-26 17:47:32.949625] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:50:32.422 [2024-11-26 17:47:32.949657] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:50:32.422 [2024-11-26 17:47:32.949674] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:50:32.422 [2024-11-26 17:47:32.949758] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:50:32.422 [2024-11-26 17:47:32.949779] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:50:32.422 [2024-11-26 17:47:32.949791] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:50:32.422 [2024-11-26 17:47:32.949809] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:50:32.422 [2024-11-26 17:47:32.949821] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:50:32.422 [2024-11-26 17:47:32.949832] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:50:32.422 [2024-11-26 17:47:32.949841] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:50:32.422 [2024-11-26 17:47:32.949850] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:50:32.422 [2024-11-26 17:47:32.949860] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:50:32.422 [2024-11-26 17:47:32.949870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.422 [2024-11-26 17:47:32.949880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:50:32.422 [2024-11-26 17:47:32.949890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:50:32.422 [2024-11-26 17:47:32.949899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.422 [2024-11-26 17:47:32.949969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.422 [2024-11-26 17:47:32.949984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:50:32.422 [2024-11-26 17:47:32.949994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:50:32.422 [2024-11-26 17:47:32.950003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.422 [2024-11-26 17:47:32.950087] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:50:32.422 [2024-11-26 17:47:32.950100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:50:32.422 [2024-11-26 17:47:32.950110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:50:32.422 [2024-11-26 17:47:32.950120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:32.422 [2024-11-26 17:47:32.950130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:50:32.422 [2024-11-26 17:47:32.950139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:50:32.422 [2024-11-26 17:47:32.950148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:50:32.422 [2024-11-26 17:47:32.950157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:50:32.422 [2024-11-26 17:47:32.950168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:50:32.422 [2024-11-26 17:47:32.950177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:50:32.422 [2024-11-26 17:47:32.950186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:50:32.422 [2024-11-26 17:47:32.950205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:50:32.422 [2024-11-26 17:47:32.950213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:50:32.422 [2024-11-26 17:47:32.950222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:50:32.422 [2024-11-26 17:47:32.950231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:50:32.422 [2024-11-26 17:47:32.950241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:32.422 [2024-11-26 17:47:32.950250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:50:32.422 [2024-11-26 17:47:32.950259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:50:32.422 [2024-11-26 17:47:32.950268] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:32.422 [2024-11-26 17:47:32.950278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:50:32.422 [2024-11-26 17:47:32.950287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:50:32.422 [2024-11-26 17:47:32.950295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:32.422 [2024-11-26 17:47:32.950304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:50:32.422 [2024-11-26 17:47:32.950312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:50:32.422 [2024-11-26 17:47:32.950320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:32.422 [2024-11-26 17:47:32.950328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:50:32.422 [2024-11-26 17:47:32.950337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:50:32.422 [2024-11-26 17:47:32.950346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:32.422 [2024-11-26 17:47:32.950354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:50:32.422 [2024-11-26 17:47:32.950363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:50:32.422 [2024-11-26 17:47:32.950371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:32.422 [2024-11-26 17:47:32.950379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:50:32.422 [2024-11-26 17:47:32.950388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:50:32.422 [2024-11-26 17:47:32.950396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:50:32.422 [2024-11-26 17:47:32.950404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:50:32.422 [2024-11-26 17:47:32.950413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:50:32.422 [2024-11-26 17:47:32.950421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:50:32.422 [2024-11-26 17:47:32.950429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:50:32.422 [2024-11-26 17:47:32.950437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:50:32.422 [2024-11-26 17:47:32.950446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:32.422 [2024-11-26 17:47:32.950455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:50:32.422 [2024-11-26 17:47:32.950465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:50:32.422 [2024-11-26 17:47:32.950474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:32.422 [2024-11-26 17:47:32.950482] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:50:32.422 [2024-11-26 17:47:32.950491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:50:32.422 [2024-11-26 17:47:32.950519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:50:32.422 [2024-11-26 17:47:32.950529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:32.422 [2024-11-26 17:47:32.950539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:50:32.423 [2024-11-26 17:47:32.950548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:50:32.423 [2024-11-26 17:47:32.950557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:50:32.423 
[2024-11-26 17:47:32.950565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:50:32.423 [2024-11-26 17:47:32.950574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:50:32.423 [2024-11-26 17:47:32.950584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:50:32.423 [2024-11-26 17:47:32.950594] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:50:32.423 [2024-11-26 17:47:32.950606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:50:32.423 [2024-11-26 17:47:32.950617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:50:32.423 [2024-11-26 17:47:32.950628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:50:32.423 [2024-11-26 17:47:32.950637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:50:32.423 [2024-11-26 17:47:32.950647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:50:32.423 [2024-11-26 17:47:32.950656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:50:32.423 [2024-11-26 17:47:32.950666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:50:32.423 [2024-11-26 17:47:32.950675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:50:32.423 [2024-11-26 17:47:32.950684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:50:32.423 [2024-11-26 17:47:32.950694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:50:32.423 [2024-11-26 17:47:32.950703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:50:32.423 [2024-11-26 17:47:32.950714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:50:32.423 [2024-11-26 17:47:32.950723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:50:32.423 [2024-11-26 17:47:32.950732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:50:32.423 [2024-11-26 17:47:32.950742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:50:32.423 [2024-11-26 17:47:32.950751] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:50:32.423 [2024-11-26 17:47:32.950761] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:50:32.423 [2024-11-26 17:47:32.950771] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:50:32.423 [2024-11-26 17:47:32.950781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:50:32.423 [2024-11-26 17:47:32.950792] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:50:32.423 [2024-11-26 17:47:32.950802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:50:32.423 [2024-11-26 17:47:32.950812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.423 [2024-11-26 17:47:32.950825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:50:32.423 [2024-11-26 17:47:32.950834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.777 ms 00:50:32.423 [2024-11-26 17:47:32.950844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.423 [2024-11-26 17:47:32.989947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.423 [2024-11-26 17:47:32.990112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:50:32.423 [2024-11-26 17:47:32.990247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.115 ms 00:50:32.423 [2024-11-26 17:47:32.990285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.423 [2024-11-26 17:47:32.990425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.423 [2024-11-26 17:47:32.990676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:50:32.423 [2024-11-26 17:47:32.990714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:50:32.423 [2024-11-26 17:47:32.990744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.423 [2024-11-26 17:47:33.066685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.423 [2024-11-26 17:47:33.066860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:50:32.423 [2024-11-26 17:47:33.067050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.016 ms 00:50:32.423 [2024-11-26 17:47:33.067090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.423 [2024-11-26 17:47:33.067205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.423 [2024-11-26 17:47:33.067307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:50:32.423 [2024-11-26 17:47:33.067344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:50:32.423 [2024-11-26 17:47:33.067375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.423 [2024-11-26 17:47:33.067897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.423 [2024-11-26 17:47:33.068010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:50:32.423 [2024-11-26 17:47:33.068086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:50:32.423 [2024-11-26 17:47:33.068120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.423 [2024-11-26 17:47:33.068266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.423 [2024-11-26 17:47:33.068421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:50:32.423 [2024-11-26 17:47:33.068460] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:50:32.423 [2024-11-26 17:47:33.068490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.423 [2024-11-26 17:47:33.087593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.423 [2024-11-26 17:47:33.087729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:50:32.423 [2024-11-26 17:47:33.087842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.067 ms 00:50:32.423 [2024-11-26 17:47:33.087878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.423 [2024-11-26 17:47:33.106707] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:50:32.423 [2024-11-26 17:47:33.106862] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:50:32.423 [2024-11-26 17:47:33.106973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.423 [2024-11-26 17:47:33.107004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:50:32.423 [2024-11-26 17:47:33.107032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.009 ms 00:50:32.423 [2024-11-26 17:47:33.107061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.683 [2024-11-26 17:47:33.135992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.683 [2024-11-26 17:47:33.136146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:50:32.683 [2024-11-26 17:47:33.136242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.884 ms 00:50:32.683 [2024-11-26 17:47:33.136279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.683 [2024-11-26 17:47:33.153302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.683 [2024-11-26 17:47:33.153449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:50:32.683 [2024-11-26 17:47:33.153543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.928 ms 00:50:32.683 [2024-11-26 17:47:33.153579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.683 [2024-11-26 17:47:33.170090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.683 [2024-11-26 17:47:33.170237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:50:32.683 [2024-11-26 17:47:33.170336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.391 ms 00:50:32.683 [2024-11-26 17:47:33.170372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.683 [2024-11-26 17:47:33.171175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.683 [2024-11-26 17:47:33.171311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:50:32.683 [2024-11-26 17:47:33.171393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.649 ms 00:50:32.684 [2024-11-26 17:47:33.171408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.684 [2024-11-26 17:47:33.257968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.684 [2024-11-26 17:47:33.258146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:50:32.684 [2024-11-26 17:47:33.258225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 86.651 ms 00:50:32.684 [2024-11-26 17:47:33.258259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.684 [2024-11-26 17:47:33.268396] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:50:32.684 [2024-11-26 17:47:33.283883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.684 [2024-11-26 17:47:33.283928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:50:32.684 [2024-11-26 17:47:33.283943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.518 ms 00:50:32.684 [2024-11-26 17:47:33.283966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.684 [2024-11-26 17:47:33.284075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.684 [2024-11-26 17:47:33.284090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:50:32.684 [2024-11-26 17:47:33.284101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:50:32.684 [2024-11-26 17:47:33.284111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.684 [2024-11-26 17:47:33.284159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.684 [2024-11-26 17:47:33.284170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:50:32.684 [2024-11-26 17:47:33.284180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:50:32.684 [2024-11-26 17:47:33.284198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.684 [2024-11-26 17:47:33.284235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.684 [2024-11-26 17:47:33.284248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:50:32.684 [2024-11-26 17:47:33.284257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:50:32.684 [2024-11-26 17:47:33.284267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.684 [2024-11-26 17:47:33.284307] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:50:32.684 [2024-11-26 17:47:33.284319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.684 [2024-11-26 17:47:33.284329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:50:32.684 [2024-11-26 17:47:33.284339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:50:32.684 [2024-11-26 17:47:33.284348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.684 [2024-11-26 17:47:33.317892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.684 [2024-11-26 17:47:33.317930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:50:32.684 [2024-11-26 17:47:33.317945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.578 ms 00:50:32.684 [2024-11-26 17:47:33.317955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.684 [2024-11-26 17:47:33.318066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.684 [2024-11-26 17:47:33.318080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:50:32.684 [2024-11-26 17:47:33.318090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:50:32.684 [2024-11-26 17:47:33.318099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:50:32.684 [2024-11-26 17:47:33.319023] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:50:32.684 [2024-11-26 17:47:33.322922] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 405.591 ms, result 0 00:50:32.684 [2024-11-26 17:47:33.323715] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:50:32.684 [2024-11-26 17:47:33.340930] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:50:32.944  [2024-11-26T17:47:33.638Z] Copying: 4096/4096 [kB] (average 23 MBps)[2024-11-26 17:47:33.511867] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:50:32.944 [2024-11-26 17:47:33.524853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.944 [2024-11-26 17:47:33.524889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:50:32.944 [2024-11-26 17:47:33.524907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:50:32.944 [2024-11-26 17:47:33.524917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.944 [2024-11-26 17:47:33.524938] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:50:32.944 [2024-11-26 17:47:33.528735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.944 [2024-11-26 17:47:33.528764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:50:32.944 [2024-11-26 17:47:33.528776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.789 ms 00:50:32.944 [2024-11-26 17:47:33.528785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.944 [2024-11-26 17:47:33.530840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.944 [2024-11-26 17:47:33.530981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:50:32.944 [2024-11-26 17:47:33.531001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.036 ms 00:50:32.944 [2024-11-26 17:47:33.531010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.944 [2024-11-26 17:47:33.534135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.944 [2024-11-26 17:47:33.534280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:50:32.944 [2024-11-26 17:47:33.534300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.098 ms 00:50:32.944 [2024-11-26 17:47:33.534310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.944 [2024-11-26 17:47:33.539565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.944 [2024-11-26 17:47:33.539597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:50:32.944 [2024-11-26 17:47:33.539608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.229 ms 00:50:32.944 [2024-11-26 17:47:33.539618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.944 [2024-11-26 17:47:33.573352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.944 [2024-11-26 17:47:33.573389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:50:32.944 [2024-11-26 17:47:33.573402] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 33.724 ms 00:50:32.944 [2024-11-26 17:47:33.573412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.944 [2024-11-26 17:47:33.593509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.944 [2024-11-26 17:47:33.593563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:50:32.944 [2024-11-26 17:47:33.593576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.064 ms 00:50:32.944 [2024-11-26 17:47:33.593586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.944 [2024-11-26 17:47:33.593715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.944 [2024-11-26 17:47:33.593729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:50:32.944 [2024-11-26 17:47:33.593754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:50:32.944 [2024-11-26 17:47:33.593763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:32.944 [2024-11-26 17:47:33.627480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:32.944 [2024-11-26 17:47:33.627529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:50:32.944 [2024-11-26 17:47:33.627541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.754 ms 00:50:32.944 [2024-11-26 17:47:33.627550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:33.205 [2024-11-26 17:47:33.660903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:33.205 [2024-11-26 17:47:33.660939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:50:33.205 [2024-11-26 17:47:33.660951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.356 ms 00:50:33.205 [2024-11-26 17:47:33.660960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:33.205 [2024-11-26 17:47:33.694817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:33.205 [2024-11-26 17:47:33.694986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:50:33.205 [2024-11-26 17:47:33.695006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.861 ms 00:50:33.205 [2024-11-26 17:47:33.695016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:33.205 [2024-11-26 17:47:33.728587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:33.205 [2024-11-26 17:47:33.728623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:50:33.205 [2024-11-26 17:47:33.728635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.522 ms 00:50:33.205 [2024-11-26 17:47:33.728644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:33.205 [2024-11-26 17:47:33.728695] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:50:33.205 [2024-11-26 17:47:33.728709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:50:33.205 [2024-11-26 17:47:33.728753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.728990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:50:33.205 [2024-11-26 17:47:33.729469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:50:33.206 [2024-11-26 17:47:33.729478] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:50:33.206 [2024-11-26 17:47:33.729488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:50:33.206 [2024-11-26 17:47:33.729516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:50:33.206 [2024-11-26 17:47:33.729526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:50:33.206 [2024-11-26 17:47:33.729536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:50:33.206 [2024-11-26 17:47:33.729566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:50:33.206 [2024-11-26 17:47:33.729577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:50:33.206 [2024-11-26 17:47:33.729587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:50:33.206 [2024-11-26 17:47:33.729597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:50:33.206 [2024-11-26 17:47:33.729607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:50:33.206 [2024-11-26 17:47:33.729617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:50:33.206 [2024-11-26 17:47:33.729626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:50:33.206 [2024-11-26 17:47:33.729636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:50:33.206 [2024-11-26 17:47:33.729646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:50:33.206 [2024-11-26 17:47:33.729655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:50:33.206 [2024-11-26 17:47:33.729680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:50:33.206 [2024-11-26 17:47:33.729694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:50:33.206 [2024-11-26 17:47:33.729704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:50:33.206 [2024-11-26 17:47:33.729727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:50:33.206 [2024-11-26 17:47:33.729737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:50:33.206 [2024-11-26 17:47:33.729748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:50:33.206 [2024-11-26 17:47:33.729758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:50:33.206 [2024-11-26 17:47:33.729774] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:50:33.206 [2024-11-26 17:47:33.729784] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 374db20d-0c07-4115-a3e3-8f48851ecd1a 00:50:33.206 [2024-11-26 17:47:33.729793] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:50:33.206 [2024-11-26 17:47:33.729802] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:50:33.206 [2024-11-26 17:47:33.729811] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:50:33.206 [2024-11-26 17:47:33.729820] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:50:33.206 [2024-11-26 17:47:33.729829] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:50:33.206 [2024-11-26 17:47:33.729839] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:50:33.206 [2024-11-26 17:47:33.729855] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:50:33.206 [2024-11-26 17:47:33.729864] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:50:33.206 [2024-11-26 17:47:33.729872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:50:33.206 [2024-11-26 17:47:33.729882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:33.206 [2024-11-26 17:47:33.729891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:50:33.206 [2024-11-26 17:47:33.729901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.189 ms 00:50:33.206 [2024-11-26 17:47:33.729910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:33.206 [2024-11-26 17:47:33.748026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:33.206 [2024-11-26 17:47:33.748059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:50:33.206 [2024-11-26 17:47:33.748071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.127 ms 00:50:33.206 [2024-11-26 17:47:33.748080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:33.206 [2024-11-26 17:47:33.748604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:33.206 [2024-11-26 17:47:33.748617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:50:33.206 [2024-11-26 17:47:33.748627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:50:33.206 [2024-11-26 17:47:33.748637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:33.206 [2024-11-26 17:47:33.801743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:33.206 [2024-11-26 17:47:33.801777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:50:33.206 [2024-11-26 17:47:33.801789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:33.206 [2024-11-26 17:47:33.801807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:33.206 [2024-11-26 17:47:33.801881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:33.206 [2024-11-26 17:47:33.801892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:50:33.206 [2024-11-26 17:47:33.801902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:33.206 [2024-11-26 17:47:33.801912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:33.206 [2024-11-26 17:47:33.801954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:33.206 [2024-11-26 17:47:33.801966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:50:33.206 [2024-11-26 17:47:33.801975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:33.206 [2024-11-26 17:47:33.801985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:33.206 [2024-11-26 17:47:33.802010] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:33.206 [2024-11-26 17:47:33.802020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:50:33.206 [2024-11-26 17:47:33.802030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:33.206 [2024-11-26 17:47:33.802039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:33.466 [2024-11-26 17:47:33.916829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:33.466 [2024-11-26 17:47:33.916880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:50:33.466 [2024-11-26 17:47:33.916894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:33.466 [2024-11-26 17:47:33.916913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:33.466 [2024-11-26 17:47:34.010176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:33.466 [2024-11-26 17:47:34.010225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:50:33.466 [2024-11-26 17:47:34.010238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:33.466 [2024-11-26 17:47:34.010249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:33.466 [2024-11-26 17:47:34.010304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:33.466 [2024-11-26 17:47:34.010315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:50:33.466 [2024-11-26 17:47:34.010325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:33.466 [2024-11-26 17:47:34.010335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:33.466 [2024-11-26 17:47:34.010362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:33.466 [2024-11-26 17:47:34.010377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:50:33.466 [2024-11-26 17:47:34.010387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:33.466 [2024-11-26 17:47:34.010398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:33.466 [2024-11-26 17:47:34.010515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:33.466 [2024-11-26 17:47:34.010530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:50:33.466 [2024-11-26 17:47:34.010540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:33.466 [2024-11-26 17:47:34.010550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:33.466 [2024-11-26 17:47:34.010585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:33.466 [2024-11-26 17:47:34.010598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:50:33.466 [2024-11-26 17:47:34.010613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:33.466 [2024-11-26 17:47:34.010623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:33.466 [2024-11-26 17:47:34.010660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:33.466 [2024-11-26 17:47:34.010671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:50:33.466 [2024-11-26 17:47:34.010680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:33.466 [2024-11-26 17:47:34.010690] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:50:33.466 [2024-11-26 17:47:34.010732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:33.466 [2024-11-26 17:47:34.010747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:50:33.466 [2024-11-26 17:47:34.010757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:33.466 [2024-11-26 17:47:34.010766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:33.466 [2024-11-26 17:47:34.010895] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 486.817 ms, result 0 00:50:34.496 00:50:34.496 00:50:34.496 17:47:35 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78869 00:50:34.496 17:47:35 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:50:34.496 17:47:35 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78869 00:50:34.496 17:47:35 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78869 ']' 00:50:34.496 17:47:35 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:34.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:34.496 17:47:35 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:50:34.496 17:47:35 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:34.496 17:47:35 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:50:34.496 17:47:35 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:50:34.496 [2024-11-26 17:47:35.170730] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:50:34.496 [2024-11-26 17:47:35.170882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78869 ] 00:50:34.755 [2024-11-26 17:47:35.358770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:35.014 [2024-11-26 17:47:35.461485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:50:35.975 17:47:36 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:50:35.975 17:47:36 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:50:35.975 17:47:36 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:50:35.975 [2024-11-26 17:47:36.524381] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:50:35.975 [2024-11-26 17:47:36.524444] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:50:36.234 [2024-11-26 17:47:36.707429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.234 [2024-11-26 17:47:36.707480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:50:36.234 [2024-11-26 17:47:36.707512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:50:36.234 [2024-11-26 17:47:36.707524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.234 [2024-11-26 17:47:36.710822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.234 [2024-11-26 17:47:36.711039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:50:36.234 [2024-11-26 17:47:36.711064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.280 ms 00:50:36.234 [2024-11-26 17:47:36.711074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.234 [2024-11-26 17:47:36.711183] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:50:36.234 [2024-11-26 17:47:36.712175] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:50:36.234 [2024-11-26 17:47:36.712226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.234 [2024-11-26 17:47:36.712237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:50:36.234 [2024-11-26 17:47:36.712250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.057 ms 00:50:36.234 [2024-11-26 17:47:36.712261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.234 [2024-11-26 17:47:36.713720] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:50:36.234 [2024-11-26 17:47:36.731064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.234 [2024-11-26 17:47:36.731112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:50:36.234 [2024-11-26 17:47:36.731126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.376 ms 00:50:36.234 [2024-11-26 17:47:36.731141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.234 [2024-11-26 17:47:36.731232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.234 [2024-11-26 17:47:36.731249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:50:36.234 [2024-11-26 17:47:36.731260] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:50:36.234 [2024-11-26 17:47:36.731272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.234 [2024-11-26 17:47:36.737821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.234 [2024-11-26 17:47:36.738047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:50:36.234 [2024-11-26 17:47:36.738067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.514 ms 00:50:36.234 [2024-11-26 17:47:36.738081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.234 [2024-11-26 17:47:36.738192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.234 [2024-11-26 17:47:36.738210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:50:36.234 [2024-11-26 17:47:36.738221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:50:36.234 [2024-11-26 17:47:36.738237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.234 [2024-11-26 17:47:36.738262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.234 [2024-11-26 17:47:36.738276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:50:36.234 [2024-11-26 17:47:36.738286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:50:36.234 [2024-11-26 17:47:36.738299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.234 [2024-11-26 17:47:36.738323] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:50:36.234 [2024-11-26 17:47:36.742884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.234 [2024-11-26 17:47:36.742917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:50:36.234 [2024-11-26 17:47:36.742934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.569 ms 00:50:36.234 [2024-11-26 17:47:36.742944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.234 [2024-11-26 17:47:36.743014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.234 [2024-11-26 17:47:36.743027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:50:36.234 [2024-11-26 17:47:36.743047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:50:36.234 [2024-11-26 17:47:36.743057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.235 [2024-11-26 17:47:36.743083] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:50:36.235 [2024-11-26 17:47:36.743105] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:50:36.235 [2024-11-26 17:47:36.743154] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:50:36.235 [2024-11-26 17:47:36.743173] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:50:36.235 [2024-11-26 17:47:36.743261] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:50:36.235 [2024-11-26 17:47:36.743275] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:50:36.235 [2024-11-26 17:47:36.743301] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:50:36.235 [2024-11-26 17:47:36.743315] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:50:36.235 [2024-11-26 17:47:36.743333] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:50:36.235 [2024-11-26 17:47:36.743345] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:50:36.235 [2024-11-26 17:47:36.743360] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:50:36.235 [2024-11-26 17:47:36.743369] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:50:36.235 [2024-11-26 17:47:36.743397] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:50:36.235 [2024-11-26 17:47:36.743408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.235 [2024-11-26 17:47:36.743424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:50:36.235 [2024-11-26 17:47:36.743434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:50:36.235 [2024-11-26 17:47:36.743453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.235 [2024-11-26 17:47:36.743535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.235 [2024-11-26 17:47:36.743558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:50:36.235 [2024-11-26 17:47:36.743568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:50:36.235 [2024-11-26 17:47:36.743582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.235 [2024-11-26 17:47:36.743664] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:50:36.235 [2024-11-26 17:47:36.743681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:50:36.235 [2024-11-26 17:47:36.743691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:50:36.235 [2024-11-26 17:47:36.743705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:36.235 [2024-11-26 17:47:36.743716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:50:36.235 [2024-11-26 17:47:36.743729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:50:36.235 [2024-11-26 17:47:36.743738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:50:36.235 [2024-11-26 17:47:36.743758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:50:36.235 [2024-11-26 17:47:36.743769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:50:36.235 [2024-11-26 17:47:36.743782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:50:36.235 [2024-11-26 17:47:36.743791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:50:36.235 [2024-11-26 17:47:36.743805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:50:36.235 [2024-11-26 17:47:36.743816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:50:36.235 [2024-11-26 17:47:36.743828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:50:36.235 [2024-11-26 17:47:36.743837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:50:36.235 [2024-11-26 17:47:36.743849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:36.235 
[2024-11-26 17:47:36.743858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:50:36.235 [2024-11-26 17:47:36.743869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:50:36.235 [2024-11-26 17:47:36.743886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:36.235 [2024-11-26 17:47:36.743897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:50:36.235 [2024-11-26 17:47:36.743907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:50:36.235 [2024-11-26 17:47:36.743918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:36.235 [2024-11-26 17:47:36.743926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:50:36.235 [2024-11-26 17:47:36.743940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:50:36.235 [2024-11-26 17:47:36.743949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:36.235 [2024-11-26 17:47:36.743960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:50:36.235 [2024-11-26 17:47:36.743968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:50:36.235 [2024-11-26 17:47:36.743979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:36.235 [2024-11-26 17:47:36.743987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:50:36.235 [2024-11-26 17:47:36.743998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:50:36.235 [2024-11-26 17:47:36.744006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:36.235 [2024-11-26 17:47:36.744017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:50:36.235 [2024-11-26 17:47:36.744026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:50:36.235 [2024-11-26 17:47:36.744039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:50:36.235 [2024-11-26 17:47:36.744048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:50:36.235 [2024-11-26 17:47:36.744059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:50:36.235 [2024-11-26 17:47:36.744068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:50:36.235 [2024-11-26 17:47:36.744078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:50:36.235 [2024-11-26 17:47:36.744087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:50:36.235 [2024-11-26 17:47:36.744099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:36.235 [2024-11-26 17:47:36.744108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:50:36.235 [2024-11-26 17:47:36.744119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:50:36.235 [2024-11-26 17:47:36.744128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:36.235 [2024-11-26 17:47:36.744139] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:50:36.235 [2024-11-26 17:47:36.744152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:50:36.235 [2024-11-26 17:47:36.744167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:50:36.235 [2024-11-26 17:47:36.744176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:36.235 [2024-11-26 17:47:36.744187] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:50:36.235 [2024-11-26 17:47:36.744196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:50:36.235 [2024-11-26 17:47:36.744207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:50:36.235 [2024-11-26 17:47:36.744215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:50:36.235 [2024-11-26 17:47:36.744226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:50:36.235 [2024-11-26 17:47:36.744234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:50:36.235 [2024-11-26 17:47:36.744247] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:50:36.235 [2024-11-26 17:47:36.744260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:50:36.235 [2024-11-26 17:47:36.744275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:50:36.235 [2024-11-26 17:47:36.744285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:50:36.235 [2024-11-26 17:47:36.744298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:50:36.235 [2024-11-26 17:47:36.744308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:50:36.235 [2024-11-26 17:47:36.744320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:50:36.235 [2024-11-26 17:47:36.744329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:50:36.235 [2024-11-26 17:47:36.744341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:50:36.236 [2024-11-26 17:47:36.744351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:50:36.236 [2024-11-26 17:47:36.744363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:50:36.236 [2024-11-26 17:47:36.744373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:50:36.236 [2024-11-26 17:47:36.744384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:50:36.236 [2024-11-26 17:47:36.744394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:50:36.236 [2024-11-26 17:47:36.744405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:50:36.236 [2024-11-26 17:47:36.744414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:50:36.236 [2024-11-26 17:47:36.744426] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:50:36.236 [2024-11-26 
17:47:36.744436] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:50:36.236 [2024-11-26 17:47:36.744451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:50:36.236 [2024-11-26 17:47:36.744461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:50:36.236 [2024-11-26 17:47:36.744472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:50:36.236 [2024-11-26 17:47:36.744482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:50:36.236 [2024-11-26 17:47:36.744520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.236 [2024-11-26 17:47:36.744531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:50:36.236 [2024-11-26 17:47:36.744543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.880 ms 00:50:36.236 [2024-11-26 17:47:36.744555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.236 [2024-11-26 17:47:36.783427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.236 [2024-11-26 17:47:36.783462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:50:36.236 [2024-11-26 17:47:36.783477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.879 ms 00:50:36.236 [2024-11-26 17:47:36.783490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.236 [2024-11-26 17:47:36.783608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.236 [2024-11-26 17:47:36.783621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:50:36.236 [2024-11-26 17:47:36.783634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:50:36.236 [2024-11-26 17:47:36.783644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.236 [2024-11-26 17:47:36.830170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.236 [2024-11-26 17:47:36.830211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:50:36.236 [2024-11-26 17:47:36.830229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.571 ms 00:50:36.236 [2024-11-26 17:47:36.830239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.236 [2024-11-26 17:47:36.830319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.236 [2024-11-26 17:47:36.830331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:50:36.236 [2024-11-26 17:47:36.830345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:50:36.236 [2024-11-26 17:47:36.830354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.236 [2024-11-26 17:47:36.830796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.236 [2024-11-26 17:47:36.830820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:50:36.236 [2024-11-26 17:47:36.830833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:50:36.236 [2024-11-26 17:47:36.830842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:50:36.236 [2024-11-26 17:47:36.830956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.236 [2024-11-26 17:47:36.830969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:50:36.236 [2024-11-26 17:47:36.830982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:50:36.236 [2024-11-26 17:47:36.830992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.236 [2024-11-26 17:47:36.851918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.236 [2024-11-26 17:47:36.851953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:50:36.236 [2024-11-26 17:47:36.851968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.935 ms 00:50:36.236 [2024-11-26 17:47:36.851978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.236 [2024-11-26 17:47:36.901268] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:50:36.236 [2024-11-26 17:47:36.901320] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:50:36.236 [2024-11-26 17:47:36.901349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.236 [2024-11-26 17:47:36.901364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:50:36.236 [2024-11-26 17:47:36.901382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.346 ms 00:50:36.236 [2024-11-26 17:47:36.901407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.494 [2024-11-26 17:47:36.929510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.494 [2024-11-26 17:47:36.929561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:50:36.494 [2024-11-26 17:47:36.929578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.014 ms 00:50:36.494 [2024-11-26 17:47:36.929588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.494 [2024-11-26 17:47:36.946342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.494 [2024-11-26 17:47:36.946378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:50:36.494 [2024-11-26 17:47:36.946396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.699 ms 00:50:36.494 [2024-11-26 17:47:36.946405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.494 [2024-11-26 17:47:36.962915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.495 [2024-11-26 17:47:36.962949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:50:36.495 [2024-11-26 17:47:36.962964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.461 ms 00:50:36.495 [2024-11-26 17:47:36.962973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.495 [2024-11-26 17:47:36.963732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.495 [2024-11-26 17:47:36.963766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:50:36.495 [2024-11-26 17:47:36.963780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:50:36.495 [2024-11-26 17:47:36.963790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.495 [2024-11-26 
17:47:37.050516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.495 [2024-11-26 17:47:37.050566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:50:36.495 [2024-11-26 17:47:37.050585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.836 ms 00:50:36.495 [2024-11-26 17:47:37.050596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.495 [2024-11-26 17:47:37.060436] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:50:36.495 [2024-11-26 17:47:37.075797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.495 [2024-11-26 17:47:37.075849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:50:36.495 [2024-11-26 17:47:37.075864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.140 ms 00:50:36.495 [2024-11-26 17:47:37.075876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.495 [2024-11-26 17:47:37.075957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.495 [2024-11-26 17:47:37.075973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:50:36.495 [2024-11-26 17:47:37.075984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:50:36.495 [2024-11-26 17:47:37.075996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.495 [2024-11-26 17:47:37.076047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.495 [2024-11-26 17:47:37.076061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:50:36.495 [2024-11-26 17:47:37.076071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:50:36.495 [2024-11-26 17:47:37.076086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.495 [2024-11-26 17:47:37.076109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.495 [2024-11-26 17:47:37.076121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:50:36.495 [2024-11-26 17:47:37.076131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:50:36.495 [2024-11-26 17:47:37.076143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.495 [2024-11-26 17:47:37.076179] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:50:36.495 [2024-11-26 17:47:37.076196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.495 [2024-11-26 17:47:37.076209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:50:36.495 [2024-11-26 17:47:37.076221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:50:36.495 [2024-11-26 17:47:37.076233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.495 [2024-11-26 17:47:37.110608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.495 [2024-11-26 17:47:37.110649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:50:36.495 [2024-11-26 17:47:37.110669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.394 ms 00:50:36.495 [2024-11-26 17:47:37.110680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.495 [2024-11-26 17:47:37.110794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.495 [2024-11-26 17:47:37.110808] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:50:36.495 [2024-11-26 17:47:37.110829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:50:36.495 [2024-11-26 17:47:37.110839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.495 [2024-11-26 17:47:37.111813] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:50:36.495 [2024-11-26 17:47:37.115611] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 404.729 ms, result 0 00:50:36.495 [2024-11-26 17:47:37.116707] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:50:36.495 Some configs were skipped because the RPC state that can call them passed over. 00:50:36.495 17:47:37 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:50:36.753 [2024-11-26 17:47:37.371131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:36.753 [2024-11-26 17:47:37.371347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:50:36.753 [2024-11-26 17:47:37.371441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.499 ms 00:50:36.753 [2024-11-26 17:47:37.371484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:36.753 [2024-11-26 17:47:37.371567] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.932 ms, result 0 00:50:36.753 true 00:50:36.753 17:47:37 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:50:37.012 [2024-11-26 17:47:37.586722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:37.012 [2024-11-26 17:47:37.586760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:50:37.012 [2024-11-26 17:47:37.586775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.083 ms 00:50:37.012 [2024-11-26 17:47:37.586785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:37.012 [2024-11-26 17:47:37.586821] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.185 ms, result 0 00:50:37.012 true 00:50:37.012 17:47:37 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78869 00:50:37.012 17:47:37 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78869 ']' 00:50:37.012 17:47:37 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78869 00:50:37.012 17:47:37 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:50:37.012 17:47:37 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:50:37.012 17:47:37 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78869 00:50:37.012 killing process with pid 78869 00:50:37.012 17:47:37 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:50:37.012 17:47:37 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:50:37.012 17:47:37 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78869' 00:50:37.012 17:47:37 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78869 00:50:37.012 17:47:37 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78869 00:50:38.393 [2024-11-26 17:47:38.702911] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:38.393 [2024-11-26 17:47:38.702973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:50:38.393 [2024-11-26 17:47:38.702988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:50:38.393 [2024-11-26 17:47:38.702999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.393 [2024-11-26 17:47:38.703024] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:50:38.393 [2024-11-26 17:47:38.706851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:38.393 [2024-11-26 17:47:38.706892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:50:38.393 [2024-11-26 17:47:38.706908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.810 ms 00:50:38.393 [2024-11-26 17:47:38.706919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.393 [2024-11-26 17:47:38.707154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:38.393 [2024-11-26 17:47:38.707168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:50:38.393 [2024-11-26 17:47:38.707180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:50:38.393 [2024-11-26 17:47:38.707191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.393 [2024-11-26 17:47:38.710313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:38.393 [2024-11-26 17:47:38.710592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:50:38.393 [2024-11-26 17:47:38.710620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.103 ms 00:50:38.393 [2024-11-26 17:47:38.710633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.393 [2024-11-26 17:47:38.715976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:38.393 [2024-11-26 17:47:38.716012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:50:38.393 [2024-11-26 17:47:38.716025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.299 ms 00:50:38.393 [2024-11-26 17:47:38.716035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.393 [2024-11-26 17:47:38.729926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:38.393 [2024-11-26 17:47:38.730108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:50:38.393 [2024-11-26 17:47:38.730135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.854 ms 00:50:38.393 [2024-11-26 17:47:38.730146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.393 [2024-11-26 17:47:38.741021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:38.393 [2024-11-26 17:47:38.741190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:50:38.393 [2024-11-26 17:47:38.741216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.797 ms 00:50:38.393 [2024-11-26 17:47:38.741227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.393 [2024-11-26 17:47:38.741389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:38.393 [2024-11-26 17:47:38.741405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:50:38.393 [2024-11-26 17:47:38.741418] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:50:38.393 [2024-11-26 17:47:38.741428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.393 [2024-11-26 17:47:38.756209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:38.393 [2024-11-26 17:47:38.756375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:50:38.393 [2024-11-26 17:47:38.756399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.783 ms 00:50:38.393 [2024-11-26 17:47:38.756409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.393 [2024-11-26 17:47:38.770831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:38.393 [2024-11-26 17:47:38.770976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:50:38.393 [2024-11-26 17:47:38.771019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.391 ms 00:50:38.393 [2024-11-26 17:47:38.771029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.393 [2024-11-26 17:47:38.784703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:38.393 [2024-11-26 17:47:38.784738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:50:38.393 [2024-11-26 17:47:38.784753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.614 ms 00:50:38.393 [2024-11-26 17:47:38.784762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.393 [2024-11-26 17:47:38.797935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:38.393 [2024-11-26 17:47:38.798097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:50:38.393 [2024-11-26 17:47:38.798121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.133 ms 00:50:38.393 [2024-11-26 17:47:38.798131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.393 [2024-11-26 17:47:38.798198] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:50:38.393 [2024-11-26 17:47:38.798214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 
17:47:38.798342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:50:38.393 [2024-11-26 17:47:38.798650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:50:38.394 [2024-11-26 17:47:38.798662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.798994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:50:38.394 [2024-11-26 17:47:38.799470] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:50:38.394 [2024-11-26 17:47:38.799484] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 374db20d-0c07-4115-a3e3-8f48851ecd1a 00:50:38.394 [2024-11-26 17:47:38.799513] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:50:38.394 [2024-11-26 17:47:38.799526] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:50:38.394 [2024-11-26 17:47:38.799537] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:50:38.394 [2024-11-26 17:47:38.799549] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:50:38.394 [2024-11-26 17:47:38.799558] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:50:38.394 [2024-11-26 17:47:38.799570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:50:38.394 [2024-11-26 17:47:38.799579] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:50:38.394 [2024-11-26 17:47:38.799590] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:50:38.394 [2024-11-26 17:47:38.799599] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:50:38.394 [2024-11-26 17:47:38.799610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:50:38.394 [2024-11-26 17:47:38.799620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:50:38.394 [2024-11-26 17:47:38.799632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.417 ms 00:50:38.394 [2024-11-26 17:47:38.799645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.394 [2024-11-26 17:47:38.818611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:38.394 [2024-11-26 17:47:38.818643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:50:38.394 [2024-11-26 17:47:38.818660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.972 ms 00:50:38.394 [2024-11-26 17:47:38.818670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.394 [2024-11-26 17:47:38.819218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:38.394 [2024-11-26 17:47:38.819241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:50:38.394 [2024-11-26 17:47:38.819256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.499 ms 00:50:38.394 [2024-11-26 17:47:38.819265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.394 [2024-11-26 17:47:38.884005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:38.394 [2024-11-26 17:47:38.884040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:50:38.394 [2024-11-26 17:47:38.884055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:38.394 [2024-11-26 17:47:38.884065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.394 [2024-11-26 17:47:38.884141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:38.394 [2024-11-26 17:47:38.884152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:50:38.395 [2024-11-26 17:47:38.884167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:38.395 [2024-11-26 17:47:38.884177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.395 [2024-11-26 17:47:38.884227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:38.395 [2024-11-26 17:47:38.884240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:50:38.395 [2024-11-26 17:47:38.884255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:38.395 [2024-11-26 17:47:38.884265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.395 [2024-11-26 17:47:38.884285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:38.395 [2024-11-26 17:47:38.884295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:50:38.395 [2024-11-26 17:47:38.884306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:38.395 [2024-11-26 17:47:38.884318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.395 [2024-11-26 17:47:39.000946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:38.395 [2024-11-26 17:47:39.001009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:50:38.395 [2024-11-26 17:47:39.001027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:38.395 [2024-11-26 17:47:39.001037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.653 [2024-11-26 
17:47:39.096206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:38.653 [2024-11-26 17:47:39.096463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:50:38.653 [2024-11-26 17:47:39.096508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:38.653 [2024-11-26 17:47:39.096519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.653 [2024-11-26 17:47:39.096596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:38.653 [2024-11-26 17:47:39.096608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:50:38.653 [2024-11-26 17:47:39.096625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:38.653 [2024-11-26 17:47:39.096635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.653 [2024-11-26 17:47:39.096667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:38.653 [2024-11-26 17:47:39.096679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:50:38.653 [2024-11-26 17:47:39.096692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:38.653 [2024-11-26 17:47:39.096701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.653 [2024-11-26 17:47:39.096821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:38.653 [2024-11-26 17:47:39.096834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:50:38.653 [2024-11-26 17:47:39.096847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:38.653 [2024-11-26 17:47:39.096857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.653 [2024-11-26 17:47:39.096907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:38.653 [2024-11-26 17:47:39.096918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:50:38.654 [2024-11-26 17:47:39.096930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:38.654 [2024-11-26 17:47:39.096940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.654 [2024-11-26 17:47:39.096982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:38.654 [2024-11-26 17:47:39.096993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:50:38.654 [2024-11-26 17:47:39.097007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:38.654 [2024-11-26 17:47:39.097017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.654 [2024-11-26 17:47:39.097061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:38.654 [2024-11-26 17:47:39.097073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:50:38.654 [2024-11-26 17:47:39.097085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:38.654 [2024-11-26 17:47:39.097094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:38.654 [2024-11-26 17:47:39.097222] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 394.930 ms, result 0 00:50:39.591 17:47:40 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:50:39.591 [2024-11-26 17:47:40.173379] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:50:39.591 [2024-11-26 17:47:40.173745] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78934 ] 00:50:39.850 [2024-11-26 17:47:40.358855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:39.850 [2024-11-26 17:47:40.477364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:50:40.421 [2024-11-26 17:47:40.836934] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:50:40.421 [2024-11-26 17:47:40.837006] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:50:40.421 [2024-11-26 17:47:40.997468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.421 [2024-11-26 17:47:40.997525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:50:40.421 [2024-11-26 17:47:40.997540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:50:40.421 [2024-11-26 17:47:40.997550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.421 [2024-11-26 17:47:41.000571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.421 [2024-11-26 17:47:41.000608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:50:40.421 [2024-11-26 17:47:41.000620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.006 ms 00:50:40.421 [2024-11-26 17:47:41.000630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.421 [2024-11-26 17:47:41.000719] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:50:40.421 [2024-11-26 17:47:41.001828] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:50:40.422 [2024-11-26 17:47:41.001864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.422 [2024-11-26 17:47:41.001884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:50:40.422 [2024-11-26 17:47:41.001894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.154 ms 00:50:40.422 [2024-11-26 17:47:41.001905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.422 [2024-11-26 17:47:41.003315] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:50:40.422 [2024-11-26 17:47:41.021674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.422 [2024-11-26 17:47:41.021711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:50:40.422 [2024-11-26 17:47:41.021725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.391 ms 00:50:40.422 [2024-11-26 17:47:41.021735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.422 [2024-11-26 17:47:41.021827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.422 [2024-11-26 17:47:41.021842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:50:40.422 [2024-11-26 17:47:41.021852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:50:40.422 [2024-11-26 
17:47:41.021862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.422 [2024-11-26 17:47:41.028322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.422 [2024-11-26 17:47:41.028575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:50:40.422 [2024-11-26 17:47:41.028597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.432 ms 00:50:40.422 [2024-11-26 17:47:41.028609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.422 [2024-11-26 17:47:41.028711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.422 [2024-11-26 17:47:41.028725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:50:40.422 [2024-11-26 17:47:41.028736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:50:40.422 [2024-11-26 17:47:41.028747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.422 [2024-11-26 17:47:41.028778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.422 [2024-11-26 17:47:41.028789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:50:40.422 [2024-11-26 17:47:41.028800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:50:40.422 [2024-11-26 17:47:41.028810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.422 [2024-11-26 17:47:41.028832] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:50:40.422 [2024-11-26 17:47:41.033325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.422 [2024-11-26 17:47:41.033357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:50:40.422 [2024-11-26 17:47:41.033368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.505 ms 00:50:40.422 [2024-11-26 17:47:41.033378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.422 [2024-11-26 17:47:41.033441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.422 [2024-11-26 17:47:41.033454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:50:40.422 [2024-11-26 17:47:41.033464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:50:40.422 [2024-11-26 17:47:41.033473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.422 [2024-11-26 17:47:41.033508] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:50:40.422 [2024-11-26 17:47:41.033529] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:50:40.422 [2024-11-26 17:47:41.033562] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:50:40.422 [2024-11-26 17:47:41.033582] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:50:40.422 [2024-11-26 17:47:41.033664] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:50:40.422 [2024-11-26 17:47:41.033678] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:50:40.422 [2024-11-26 17:47:41.033691] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:50:40.422 [2024-11-26 17:47:41.033708] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:50:40.422 [2024-11-26 17:47:41.033721] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:50:40.422 [2024-11-26 17:47:41.033732] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:50:40.422 [2024-11-26 17:47:41.033742] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:50:40.422 [2024-11-26 17:47:41.033751] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:50:40.422 [2024-11-26 17:47:41.033760] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:50:40.422 [2024-11-26 17:47:41.033771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.422 [2024-11-26 17:47:41.033781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:50:40.422 [2024-11-26 17:47:41.033791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:50:40.422 [2024-11-26 17:47:41.033802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.422 [2024-11-26 17:47:41.033873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.422 [2024-11-26 17:47:41.033888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:50:40.422 [2024-11-26 17:47:41.033898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:50:40.422 [2024-11-26 17:47:41.033907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.422 [2024-11-26 17:47:41.033993] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:50:40.422 [2024-11-26 17:47:41.034012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:50:40.422 [2024-11-26 17:47:41.034022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:50:40.422 [2024-11-26 17:47:41.034032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:40.422 [2024-11-26 17:47:41.034043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:50:40.422 [2024-11-26 17:47:41.034052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:50:40.422 [2024-11-26 17:47:41.034061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:50:40.422 [2024-11-26 17:47:41.034071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:50:40.422 [2024-11-26 17:47:41.034081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:50:40.422 [2024-11-26 17:47:41.034090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:50:40.422 [2024-11-26 17:47:41.034102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:50:40.422 [2024-11-26 17:47:41.034120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:50:40.422 [2024-11-26 17:47:41.034129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:50:40.422 [2024-11-26 17:47:41.034139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:50:40.422 [2024-11-26 17:47:41.034148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:50:40.422 [2024-11-26 17:47:41.034157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:40.422 [2024-11-26 17:47:41.034166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:50:40.422 [2024-11-26 17:47:41.034174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:50:40.422 [2024-11-26 17:47:41.034183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:40.422 [2024-11-26 17:47:41.034192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:50:40.422 [2024-11-26 17:47:41.034201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:50:40.422 [2024-11-26 17:47:41.034210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:40.422 [2024-11-26 17:47:41.034218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:50:40.422 [2024-11-26 17:47:41.034227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:50:40.422 [2024-11-26 17:47:41.034236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:40.422 [2024-11-26 17:47:41.034244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:50:40.422 [2024-11-26 17:47:41.034253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:50:40.422 [2024-11-26 17:47:41.034261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:40.422 [2024-11-26 17:47:41.034270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:50:40.422 [2024-11-26 17:47:41.034279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:50:40.423 [2024-11-26 17:47:41.034287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:40.423 [2024-11-26 17:47:41.034297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:50:40.423 [2024-11-26 17:47:41.034305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:50:40.423 [2024-11-26 17:47:41.034313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:50:40.423 [2024-11-26 17:47:41.034321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:50:40.423 [2024-11-26 17:47:41.034330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:50:40.423 [2024-11-26 17:47:41.034340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:50:40.423 [2024-11-26 17:47:41.034348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:50:40.423 [2024-11-26 17:47:41.034357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:50:40.423 [2024-11-26 17:47:41.034365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:40.423 [2024-11-26 17:47:41.034374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:50:40.423 [2024-11-26 17:47:41.034382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:50:40.423 [2024-11-26 17:47:41.034392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:40.423 [2024-11-26 17:47:41.034400] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:50:40.423 [2024-11-26 17:47:41.034410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:50:40.423 [2024-11-26 17:47:41.034423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:50:40.423 [2024-11-26 17:47:41.034433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:40.423 [2024-11-26 17:47:41.034442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:50:40.423 [2024-11-26 17:47:41.034451] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:50:40.423 [2024-11-26 17:47:41.034459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:50:40.423 [2024-11-26 17:47:41.034468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:50:40.423 [2024-11-26 17:47:41.034477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:50:40.423 [2024-11-26 17:47:41.034486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:50:40.423 [2024-11-26 17:47:41.034508] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:50:40.423 [2024-11-26 17:47:41.034520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:50:40.423 [2024-11-26 17:47:41.034532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:50:40.423 [2024-11-26 17:47:41.034543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:50:40.423 [2024-11-26 17:47:41.034553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:50:40.423 [2024-11-26 17:47:41.034563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:50:40.423 [2024-11-26 17:47:41.034574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:50:40.423 [2024-11-26 17:47:41.034585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:50:40.423 [2024-11-26 17:47:41.034595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:50:40.423 [2024-11-26 17:47:41.034605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:50:40.423 [2024-11-26 17:47:41.034615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:50:40.423 [2024-11-26 17:47:41.034624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:50:40.423 [2024-11-26 17:47:41.034634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:50:40.423 [2024-11-26 17:47:41.034643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:50:40.423 [2024-11-26 17:47:41.034652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:50:40.423 [2024-11-26 17:47:41.034662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:50:40.423 [2024-11-26 17:47:41.034672] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:50:40.423 [2024-11-26 17:47:41.034682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:50:40.423 [2024-11-26 17:47:41.034692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:50:40.423 [2024-11-26 17:47:41.034702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:50:40.423 [2024-11-26 17:47:41.034711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:50:40.423 [2024-11-26 17:47:41.034722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:50:40.423 [2024-11-26 17:47:41.034732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.423 [2024-11-26 17:47:41.034746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:50:40.423 [2024-11-26 17:47:41.034756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.791 ms 00:50:40.423 [2024-11-26 17:47:41.034766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.423 [2024-11-26 17:47:41.071315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.423 [2024-11-26 17:47:41.071351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:50:40.423 [2024-11-26 17:47:41.071364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.556 ms 00:50:40.423 [2024-11-26 17:47:41.071374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.423 [2024-11-26 17:47:41.071489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.423 [2024-11-26 17:47:41.071530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:50:40.423 [2024-11-26 17:47:41.071542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:50:40.423 [2024-11-26 17:47:41.071552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.682 [2024-11-26 17:47:41.137737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.682 [2024-11-26 17:47:41.137775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:50:40.682 [2024-11-26 17:47:41.137793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.270 ms 00:50:40.682 [2024-11-26 17:47:41.137804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.682 [2024-11-26 17:47:41.137896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.682 [2024-11-26 17:47:41.137909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:50:40.682 [2024-11-26 17:47:41.137920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:50:40.682 [2024-11-26 17:47:41.137930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.682 [2024-11-26 17:47:41.138360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.682 [2024-11-26 17:47:41.138373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:50:40.682 [2024-11-26 17:47:41.138390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:50:40.682 [2024-11-26 17:47:41.138400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.682 [2024-11-26 17:47:41.138526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:50:40.682 [2024-11-26 17:47:41.138542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:50:40.682 [2024-11-26 17:47:41.138571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:50:40.682 [2024-11-26 17:47:41.138581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.682 [2024-11-26 17:47:41.155067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.682 [2024-11-26 17:47:41.155101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:50:40.682 [2024-11-26 17:47:41.155114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.491 ms 00:50:40.682 [2024-11-26 17:47:41.155125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.682 [2024-11-26 17:47:41.172415] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:50:40.682 [2024-11-26 17:47:41.172454] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:50:40.682 [2024-11-26 17:47:41.172469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.682 [2024-11-26 17:47:41.172480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:50:40.682 [2024-11-26 17:47:41.172491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.277 ms 00:50:40.682 [2024-11-26 17:47:41.172522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.682 [2024-11-26 17:47:41.200963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.682 [2024-11-26 17:47:41.201002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:50:40.682 [2024-11-26 17:47:41.201015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.410 ms 00:50:40.682 [2024-11-26 17:47:41.201026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.682 [2024-11-26 17:47:41.218266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.682 [2024-11-26 17:47:41.218303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:50:40.682 [2024-11-26 17:47:41.218316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.190 ms 00:50:40.682 [2024-11-26 17:47:41.218326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.682 [2024-11-26 17:47:41.235579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.682 [2024-11-26 17:47:41.235802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:50:40.682 [2024-11-26 17:47:41.235822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.210 ms 00:50:40.682 [2024-11-26 17:47:41.235833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.682 [2024-11-26 17:47:41.236554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.683 [2024-11-26 17:47:41.236578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:50:40.683 [2024-11-26 17:47:41.236590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:50:40.683 [2024-11-26 17:47:41.236600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.683 [2024-11-26 17:47:41.322171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.683 [2024-11-26 
17:47:41.322222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:50:40.683 [2024-11-26 17:47:41.322236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.683 ms 00:50:40.683 [2024-11-26 17:47:41.322247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.683 [2024-11-26 17:47:41.333144] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:50:40.683 [2024-11-26 17:47:41.348519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.683 [2024-11-26 17:47:41.348753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:50:40.683 [2024-11-26 17:47:41.348776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.235 ms 00:50:40.683 [2024-11-26 17:47:41.348793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.683 [2024-11-26 17:47:41.348895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.683 [2024-11-26 17:47:41.348909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:50:40.683 [2024-11-26 17:47:41.348920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:50:40.683 [2024-11-26 17:47:41.348931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.683 [2024-11-26 17:47:41.348981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.683 [2024-11-26 17:47:41.348992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:50:40.683 [2024-11-26 17:47:41.349003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:50:40.683 [2024-11-26 17:47:41.349017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.683 [2024-11-26 17:47:41.349051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.683 [2024-11-26 17:47:41.349064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:50:40.683 [2024-11-26 17:47:41.349074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:50:40.683 [2024-11-26 17:47:41.349085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.683 [2024-11-26 17:47:41.349121] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:50:40.683 [2024-11-26 17:47:41.349133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.683 [2024-11-26 17:47:41.349144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:50:40.683 [2024-11-26 17:47:41.349154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:50:40.683 [2024-11-26 17:47:41.349164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.942 [2024-11-26 17:47:41.382946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.942 [2024-11-26 17:47:41.382985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:50:40.942 [2024-11-26 17:47:41.382999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.815 ms 00:50:40.942 [2024-11-26 17:47:41.383009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.942 [2024-11-26 17:47:41.383117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:40.942 [2024-11-26 17:47:41.383130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:50:40.942 [2024-11-26 
17:47:41.383142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:50:40.942 [2024-11-26 17:47:41.383153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:40.942 [2024-11-26 17:47:41.384056] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:50:40.942 [2024-11-26 17:47:41.388004] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 386.929 ms, result 0 00:50:40.942 [2024-11-26 17:47:41.388765] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:50:40.942 [2024-11-26 17:47:41.406147] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:50:41.880  [2024-11-26T17:47:43.510Z] Copying: 28/256 [MB] (28 MBps) [2024-11-26T17:47:44.475Z] Copying: 54/256 [MB] (26 MBps) [2024-11-26T17:47:45.856Z] Copying: 81/256 [MB] (26 MBps) [2024-11-26T17:47:46.795Z] Copying: 107/256 [MB] (26 MBps) [2024-11-26T17:47:47.733Z] Copying: 133/256 [MB] (25 MBps) [2024-11-26T17:47:48.672Z] Copying: 159/256 [MB] (26 MBps) [2024-11-26T17:47:49.610Z] Copying: 186/256 [MB] (26 MBps) [2024-11-26T17:47:50.548Z] Copying: 212/256 [MB] (25 MBps) [2024-11-26T17:47:51.487Z] Copying: 238/256 [MB] (25 MBps) [2024-11-26T17:47:51.487Z] Copying: 256/256 [MB] (average 26 MBps)[2024-11-26 17:47:51.452030] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:50:50.793 [2024-11-26 17:47:51.468073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:50.793 [2024-11-26 17:47:51.468141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:50:50.793 [2024-11-26 17:47:51.468171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:50:50.793 [2024-11-26 17:47:51.468188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:50.793 [2024-11-26 17:47:51.468225] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:50:50.793 [2024-11-26 17:47:51.472135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:50.793 [2024-11-26 17:47:51.472190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:50:50.793 [2024-11-26 17:47:51.472210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.888 ms 00:50:50.793 [2024-11-26 17:47:51.472227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:50.793 [2024-11-26 17:47:51.472607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:50.793 [2024-11-26 17:47:51.472644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:50:50.793 [2024-11-26 17:47:51.472665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:50:50.793 [2024-11-26 17:47:51.472684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:50.793 [2024-11-26 17:47:51.476084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:50.793 [2024-11-26 17:47:51.476160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:50:50.793 [2024-11-26 17:47:51.476181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.360 ms 00:50:50.793 [2024-11-26 17:47:51.476199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:50.793 [2024-11-26 
17:47:51.483026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:50.793 [2024-11-26 17:47:51.483072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:50:50.793 [2024-11-26 17:47:51.483088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.794 ms 00:50:50.793 [2024-11-26 17:47:51.483102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:51.054 [2024-11-26 17:47:51.520146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:51.054 [2024-11-26 17:47:51.520186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:50:51.054 [2024-11-26 17:47:51.520200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.015 ms 00:50:51.054 [2024-11-26 17:47:51.520211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:51.054 [2024-11-26 17:47:51.541659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:51.054 [2024-11-26 17:47:51.541697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:50:51.054 [2024-11-26 17:47:51.541717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.416 ms 00:50:51.054 [2024-11-26 17:47:51.541727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:51.054 [2024-11-26 17:47:51.541861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:51.054 [2024-11-26 17:47:51.541876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:50:51.054 [2024-11-26 17:47:51.541899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:50:51.054 [2024-11-26 17:47:51.541909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:51.054 [2024-11-26 17:47:51.577811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:51.054 [2024-11-26 17:47:51.577849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:50:51.054 [2024-11-26 17:47:51.577863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.941 ms 00:50:51.054 [2024-11-26 17:47:51.577872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:51.054 [2024-11-26 17:47:51.613513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:51.054 [2024-11-26 17:47:51.613548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:50:51.054 [2024-11-26 17:47:51.613560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.638 ms 00:50:51.054 [2024-11-26 17:47:51.613569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:51.054 [2024-11-26 17:47:51.647956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:51.054 [2024-11-26 17:47:51.647992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:50:51.054 [2024-11-26 17:47:51.648005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.387 ms 00:50:51.054 [2024-11-26 17:47:51.648015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:51.054 [2024-11-26 17:47:51.682682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:51.054 [2024-11-26 17:47:51.682866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:50:51.054 [2024-11-26 17:47:51.682888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.642 ms 00:50:51.054 [2024-11-26 17:47:51.682898] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:51.054 [2024-11-26 17:47:51.682957] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:50:51.054 [2024-11-26 17:47:51.682975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free [... identical ftl_dev_dump_bands records for Band 2 through Band 98 elided: every band reports 0 / 261120 wr_cnt: 0 state: free ...] 00:50:51.055 [2024-11-26
17:47:51.684127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:50:51.055 [2024-11-26 17:47:51.684138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:50:51.055 [2024-11-26 17:47:51.684156] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:50:51.055 [2024-11-26 17:47:51.684167] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 374db20d-0c07-4115-a3e3-8f48851ecd1a 00:50:51.055 [2024-11-26 17:47:51.684178] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:50:51.055 [2024-11-26 17:47:51.684188] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:50:51.055 [2024-11-26 17:47:51.684199] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:50:51.055 [2024-11-26 17:47:51.684210] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:50:51.055 [2024-11-26 17:47:51.684220] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:50:51.055 [2024-11-26 17:47:51.684231] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:50:51.055 [2024-11-26 17:47:51.684247] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:50:51.055 [2024-11-26 17:47:51.684256] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:50:51.055 [2024-11-26 17:47:51.684265] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:50:51.055 [2024-11-26 17:47:51.684275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:51.055 [2024-11-26 17:47:51.684286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:50:51.055 [2024-11-26 17:47:51.684297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.322 ms 00:50:51.055 [2024-11-26 17:47:51.684308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:51.055 [2024-11-26 17:47:51.703465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:51.055 [2024-11-26 17:47:51.703515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:50:51.055 [2024-11-26 17:47:51.703529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.168 ms 00:50:51.055 [2024-11-26 17:47:51.703539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:51.055 [2024-11-26 17:47:51.704081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:51.055 [2024-11-26 17:47:51.704101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:50:51.055 [2024-11-26 17:47:51.704113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:50:51.055 [2024-11-26 17:47:51.704124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:51.315 [2024-11-26 17:47:51.756923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:51.315 [2024-11-26 17:47:51.757087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:50:51.315 [2024-11-26 17:47:51.757109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:51.315 [2024-11-26 17:47:51.757127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:51.315 [2024-11-26 17:47:51.757206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:51.315 [2024-11-26 17:47:51.757218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 
metadata 00:50:51.315 [2024-11-26 17:47:51.757229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:51.315 [2024-11-26 17:47:51.757239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:51.315 [2024-11-26 17:47:51.757289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:51.315 [2024-11-26 17:47:51.757303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:50:51.315 [2024-11-26 17:47:51.757314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:51.315 [2024-11-26 17:47:51.757324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:51.315 [2024-11-26 17:47:51.757348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:51.315 [2024-11-26 17:47:51.757359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:50:51.315 [2024-11-26 17:47:51.757370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:51.315 [2024-11-26 17:47:51.757380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:51.315 [2024-11-26 17:47:51.882588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:51.315 [2024-11-26 17:47:51.882646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:50:51.315 [2024-11-26 17:47:51.882662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:51.315 [2024-11-26 17:47:51.882672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:51.315 [2024-11-26 17:47:51.982836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:51.315 [2024-11-26 17:47:51.982893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:50:51.315 [2024-11-26 17:47:51.982909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:51.315 [2024-11-26 17:47:51.982920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:51.315 [2024-11-26 17:47:51.983015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:51.315 [2024-11-26 17:47:51.983034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:50:51.315 [2024-11-26 17:47:51.983052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:51.315 [2024-11-26 17:47:51.983068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:51.315 [2024-11-26 17:47:51.983114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:51.315 [2024-11-26 17:47:51.983143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:50:51.315 [2024-11-26 17:47:51.983161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:51.315 [2024-11-26 17:47:51.983177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:51.315 [2024-11-26 17:47:51.983317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:51.315 [2024-11-26 17:47:51.983342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:50:51.315 [2024-11-26 17:47:51.983361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:51.315 [2024-11-26 17:47:51.983390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:51.315 [2024-11-26 17:47:51.983486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:51.315 [2024-11-26 17:47:51.983549] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:50:51.316 [2024-11-26 17:47:51.983579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:51.316 [2024-11-26 17:47:51.983599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:51.316 [2024-11-26 17:47:51.983669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:51.316 [2024-11-26 17:47:51.983691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:50:51.316 [2024-11-26 17:47:51.983710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:51.316 [2024-11-26 17:47:51.983727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:51.316 [2024-11-26 17:47:51.983795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:50:51.316 [2024-11-26 17:47:51.983823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:50:51.316 [2024-11-26 17:47:51.983840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:50:51.316 [2024-11-26 17:47:51.983858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:51.316 [2024-11-26 17:47:51.984081] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 516.850 ms, result 0 00:50:52.694 00:50:52.694 00:50:52.694 17:47:53 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:50:52.953 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:50:52.953 17:47:53 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:50:52.953 17:47:53 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:50:52.953 17:47:53 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:50:52.953 17:47:53 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:50:52.953 17:47:53 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:50:52.953 17:47:53 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:50:52.953 17:47:53 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78869 00:50:52.953 17:47:53 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78869 ']' 00:50:52.953 17:47:53 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78869 00:50:52.954 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78869) - No such process 00:50:52.954 Process with pid 78869 is not found 00:50:52.954 17:47:53 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78869 is not found' 00:50:52.954 ************************************ 00:50:52.954 END TEST ftl_trim 00:50:52.954 ************************************ 00:50:52.954 00:50:52.954 real 1m12.246s 00:50:52.954 user 1m40.898s 00:50:52.954 sys 0m7.211s 00:50:52.954 17:47:53 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:50:52.954 17:47:53 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:50:53.225 17:47:53 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:50:53.225 17:47:53 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:50:53.225 17:47:53 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:50:53.225 17:47:53 ftl -- common/autotest_common.sh@10 -- # set +x 00:50:53.225 ************************************ 
00:50:53.225 START TEST ftl_restore 00:50:53.225 ************************************ 00:50:53.225 17:47:53 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:50:53.225 * Looking for test storage... 00:50:53.225 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:50:53.225 17:47:53 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:50:53.225 17:47:53 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:50:53.225 17:47:53 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:50:53.225 17:47:53 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:50:53.225 17:47:53 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:50:53.225 17:47:53 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:50:53.225 17:47:53 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:50:53.225 17:47:53 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:50:53.225 17:47:53 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:50:53.225 17:47:53 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:50:53.225 17:47:53 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:50:53.225 17:47:53 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:50:53.225 17:47:53 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:50:53.225 17:47:53 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:50:53.225 17:47:53 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:50:53.225 17:47:53 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:50:53.225 17:47:53 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:50:53.225 17:47:53 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:50:53.225 17:47:53 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:50:53.225 17:47:53 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:50:53.498 17:47:53 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:50:53.498 17:47:53 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:50:53.498 17:47:53 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:50:53.498 17:47:53 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:50:53.498 17:47:53 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:50:53.498 17:47:53 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:50:53.498 17:47:53 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:50:53.498 17:47:53 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:50:53.498 17:47:53 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:50:53.498 17:47:53 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:50:53.498 17:47:53 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:50:53.498 17:47:53 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:50:53.498 17:47:53 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:50:53.498 17:47:53 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:50:53.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:53.498 --rc genhtml_branch_coverage=1 00:50:53.498 --rc genhtml_function_coverage=1 00:50:53.498 --rc genhtml_legend=1 00:50:53.498 --rc geninfo_all_blocks=1 00:50:53.498 --rc geninfo_unexecuted_blocks=1 00:50:53.498 00:50:53.498 ' 00:50:53.498 17:47:53 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:50:53.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:53.498 --rc genhtml_branch_coverage=1 00:50:53.498 --rc genhtml_function_coverage=1 00:50:53.498 --rc genhtml_legend=1 00:50:53.498 --rc geninfo_all_blocks=1 00:50:53.498 --rc geninfo_unexecuted_blocks=1 00:50:53.498 00:50:53.498 ' 00:50:53.498 17:47:53 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:50:53.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:53.498 --rc genhtml_branch_coverage=1 00:50:53.498 --rc genhtml_function_coverage=1 00:50:53.498 --rc genhtml_legend=1 00:50:53.498 --rc geninfo_all_blocks=1 00:50:53.498 --rc geninfo_unexecuted_blocks=1 00:50:53.498 00:50:53.498 ' 00:50:53.498 17:47:53 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:50:53.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:53.498 --rc genhtml_branch_coverage=1 00:50:53.498 --rc genhtml_function_coverage=1 00:50:53.498 --rc genhtml_legend=1 00:50:53.498 --rc geninfo_all_blocks=1 00:50:53.498 --rc geninfo_unexecuted_blocks=1 00:50:53.498 00:50:53.498 ' 00:50:53.498 17:47:53 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:50:53.498 17:47:53 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:50:53.498 17:47:53 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:50:53.498 17:47:53 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:50:53.498 17:47:53 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
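The 'lt 1.15 2' xtrace above walks scripts/common.sh's element-wise version comparison: each version string is split on '.', '-', and ':' into an array, the fields are compared numerically left to right, and a missing field counts as 0. A minimal standalone sketch of the same idea, assuming plain bash (the function name ver_lt is illustrative, not the SPDK helper):

#!/usr/bin/env bash
# Element-wise version compare: "ver_lt A B" exits 0 iff A < B.
ver_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"      # "1.15" -> (1 15)
    IFS='.-:' read -ra v2 <<< "$2"      # "2"    -> (2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # absent fields compare as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                                # equal is not less-than
}
ver_lt 1.15 2 && echo '1.15 < 2'            # prints: 1.15 < 2

The first fields already decide here (1 < 2), which is why the trace above returns 0 after a single ver1[v]/ver2[v] comparison and then enables the lcov branch and function coverage options.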
00:50:53.498 17:47:53 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:50:53.498 17:47:53 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:50:53.498 17:47:53 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:50:53.498 17:47:53 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:50:53.498 17:47:53 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:50:53.498 17:47:53 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:50:53.498 17:47:53 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:50:53.498 17:47:53 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:50:53.498 17:47:53 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:50:53.498 17:47:53 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:50:53.498 17:47:53 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:50:53.498 17:47:53 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:50:53.498 17:47:53 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:50:53.498 17:47:53 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:50:53.498 17:47:53 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:50:53.498 17:47:53 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:50:53.499 17:47:53 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:50:53.499 17:47:53 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:50:53.499 17:47:53 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:50:53.499 17:47:53 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:50:53.499 17:47:53 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:50:53.499 17:47:53 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:50:53.499 17:47:53 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:50:53.499 17:47:53 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:50:53.499 17:47:53 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:50:53.499 17:47:53 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:50:53.499 17:47:53 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.mZPYkqTt6P 00:50:53.499 17:47:53 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:50:53.499 17:47:53 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:50:53.499 17:47:53 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:50:53.499 17:47:53 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:50:53.499 17:47:53 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:50:53.499 17:47:53 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:50:53.499 17:47:53 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:50:53.499 17:47:53 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:50:53.499 
17:47:53 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79147 00:50:53.499 17:47:53 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79147 00:50:53.499 17:47:53 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79147 ']' 00:50:53.499 17:47:53 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:50:53.499 17:47:53 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:53.499 17:47:53 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:50:53.499 17:47:53 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:53.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:53.499 17:47:53 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:50:53.499 17:47:53 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:50:53.499 [2024-11-26 17:47:54.097606] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:50:53.499 [2024-11-26 17:47:54.097768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79147 ] 00:50:53.757 [2024-11-26 17:47:54.283123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:53.757 [2024-11-26 17:47:54.435865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:50:55.136 17:47:55 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:50:55.136 17:47:55 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:50:55.136 17:47:55 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:50:55.136 17:47:55 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:50:55.136 17:47:55 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:50:55.136 17:47:55 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:50:55.136 17:47:55 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:50:55.136 17:47:55 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:50:55.136 17:47:55 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:50:55.136 17:47:55 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:50:55.136 17:47:55 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:50:55.136 17:47:55 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:50:55.136 17:47:55 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:50:55.136 17:47:55 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:50:55.136 17:47:55 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:50:55.136 17:47:55 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:50:55.396 17:47:56 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:50:55.396 { 00:50:55.396 "name": "nvme0n1", 00:50:55.396 "aliases": [ 00:50:55.396 "47abdcd4-6767-4080-ab0e-40b72683ef63" 00:50:55.396 ], 00:50:55.396 "product_name": "NVMe disk", 00:50:55.396 "block_size": 4096, 00:50:55.396 "num_blocks": 1310720, 00:50:55.396 "uuid": 
"47abdcd4-6767-4080-ab0e-40b72683ef63", 00:50:55.396 "numa_id": -1, 00:50:55.396 "assigned_rate_limits": { 00:50:55.396 "rw_ios_per_sec": 0, 00:50:55.396 "rw_mbytes_per_sec": 0, 00:50:55.396 "r_mbytes_per_sec": 0, 00:50:55.396 "w_mbytes_per_sec": 0 00:50:55.396 }, 00:50:55.396 "claimed": true, 00:50:55.396 "claim_type": "read_many_write_one", 00:50:55.396 "zoned": false, 00:50:55.396 "supported_io_types": { 00:50:55.396 "read": true, 00:50:55.396 "write": true, 00:50:55.396 "unmap": true, 00:50:55.396 "flush": true, 00:50:55.396 "reset": true, 00:50:55.396 "nvme_admin": true, 00:50:55.396 "nvme_io": true, 00:50:55.396 "nvme_io_md": false, 00:50:55.396 "write_zeroes": true, 00:50:55.396 "zcopy": false, 00:50:55.396 "get_zone_info": false, 00:50:55.396 "zone_management": false, 00:50:55.396 "zone_append": false, 00:50:55.396 "compare": true, 00:50:55.396 "compare_and_write": false, 00:50:55.396 "abort": true, 00:50:55.396 "seek_hole": false, 00:50:55.396 "seek_data": false, 00:50:55.396 "copy": true, 00:50:55.396 "nvme_iov_md": false 00:50:55.396 }, 00:50:55.396 "driver_specific": { 00:50:55.396 "nvme": [ 00:50:55.396 { 00:50:55.396 "pci_address": "0000:00:11.0", 00:50:55.396 "trid": { 00:50:55.396 "trtype": "PCIe", 00:50:55.396 "traddr": "0000:00:11.0" 00:50:55.396 }, 00:50:55.396 "ctrlr_data": { 00:50:55.396 "cntlid": 0, 00:50:55.396 "vendor_id": "0x1b36", 00:50:55.396 "model_number": "QEMU NVMe Ctrl", 00:50:55.396 "serial_number": "12341", 00:50:55.396 "firmware_revision": "8.0.0", 00:50:55.396 "subnqn": "nqn.2019-08.org.qemu:12341", 00:50:55.396 "oacs": { 00:50:55.396 "security": 0, 00:50:55.396 "format": 1, 00:50:55.396 "firmware": 0, 00:50:55.396 "ns_manage": 1 00:50:55.396 }, 00:50:55.396 "multi_ctrlr": false, 00:50:55.396 "ana_reporting": false 00:50:55.396 }, 00:50:55.396 "vs": { 00:50:55.396 "nvme_version": "1.4" 00:50:55.396 }, 00:50:55.396 "ns_data": { 00:50:55.396 "id": 1, 00:50:55.396 "can_share": false 00:50:55.396 } 00:50:55.396 } 00:50:55.396 ], 00:50:55.396 "mp_policy": "active_passive" 00:50:55.396 } 00:50:55.396 } 00:50:55.396 ]' 00:50:55.396 17:47:56 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:50:55.396 17:47:56 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:50:55.396 17:47:56 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:50:55.655 17:47:56 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:50:55.655 17:47:56 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:50:55.655 17:47:56 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:50:55.655 17:47:56 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:50:55.655 17:47:56 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:50:55.655 17:47:56 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:50:55.655 17:47:56 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:50:55.655 17:47:56 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:50:55.914 17:47:56 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=2eb553cf-fc9f-40a8-861f-795083f03615 00:50:55.914 17:47:56 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:50:55.914 17:47:56 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2eb553cf-fc9f-40a8-861f-795083f03615 00:50:55.914 17:47:56 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:50:56.173 17:47:56 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=52f4a77b-704d-480d-9ace-1e790dcc70a6 00:50:56.173 17:47:56 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 52f4a77b-704d-480d-9ace-1e790dcc70a6 00:50:56.432 17:47:57 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=982f6426-1897-4761-a26c-70426a70941b 00:50:56.432 17:47:57 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:50:56.432 17:47:57 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 982f6426-1897-4761-a26c-70426a70941b 00:50:56.433 17:47:57 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:50:56.433 17:47:57 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:50:56.433 17:47:57 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=982f6426-1897-4761-a26c-70426a70941b 00:50:56.433 17:47:57 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:50:56.433 17:47:57 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 982f6426-1897-4761-a26c-70426a70941b 00:50:56.433 17:47:57 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=982f6426-1897-4761-a26c-70426a70941b 00:50:56.433 17:47:57 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:50:56.433 17:47:57 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:50:56.433 17:47:57 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:50:56.433 17:47:57 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 982f6426-1897-4761-a26c-70426a70941b 00:50:56.691 17:47:57 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:50:56.691 { 00:50:56.691 "name": "982f6426-1897-4761-a26c-70426a70941b", 00:50:56.691 "aliases": [ 00:50:56.691 "lvs/nvme0n1p0" 00:50:56.691 ], 00:50:56.691 "product_name": "Logical Volume", 00:50:56.691 "block_size": 4096, 00:50:56.691 "num_blocks": 26476544, 00:50:56.691 "uuid": "982f6426-1897-4761-a26c-70426a70941b", 00:50:56.691 "assigned_rate_limits": { 00:50:56.691 "rw_ios_per_sec": 0, 00:50:56.691 "rw_mbytes_per_sec": 0, 00:50:56.691 "r_mbytes_per_sec": 0, 00:50:56.691 "w_mbytes_per_sec": 0 00:50:56.691 }, 00:50:56.691 "claimed": false, 00:50:56.691 "zoned": false, 00:50:56.691 "supported_io_types": { 00:50:56.691 "read": true, 00:50:56.691 "write": true, 00:50:56.691 "unmap": true, 00:50:56.691 "flush": false, 00:50:56.691 "reset": true, 00:50:56.691 "nvme_admin": false, 00:50:56.691 "nvme_io": false, 00:50:56.691 "nvme_io_md": false, 00:50:56.691 "write_zeroes": true, 00:50:56.691 "zcopy": false, 00:50:56.691 "get_zone_info": false, 00:50:56.691 "zone_management": false, 00:50:56.691 "zone_append": false, 00:50:56.691 "compare": false, 00:50:56.691 "compare_and_write": false, 00:50:56.691 "abort": false, 00:50:56.691 "seek_hole": true, 00:50:56.691 "seek_data": true, 00:50:56.691 "copy": false, 00:50:56.691 "nvme_iov_md": false 00:50:56.692 }, 00:50:56.692 "driver_specific": { 00:50:56.692 "lvol": { 00:50:56.692 "lvol_store_uuid": "52f4a77b-704d-480d-9ace-1e790dcc70a6", 00:50:56.692 "base_bdev": "nvme0n1", 00:50:56.692 "thin_provision": true, 00:50:56.692 "num_allocated_clusters": 0, 00:50:56.692 "snapshot": false, 00:50:56.692 "clone": false, 00:50:56.692 "esnap_clone": false 00:50:56.692 } 00:50:56.692 } 00:50:56.692 } 00:50:56.692 ]' 00:50:56.692 17:47:57 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:50:56.692 17:47:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:50:56.692 17:47:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:50:56.692 17:47:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:50:56.692 17:47:57 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:50:56.692 17:47:57 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:50:56.692 17:47:57 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:50:56.692 17:47:57 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:50:56.692 17:47:57 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:50:56.950 17:47:57 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:50:56.950 17:47:57 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:50:56.950 17:47:57 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 982f6426-1897-4761-a26c-70426a70941b 00:50:56.951 17:47:57 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=982f6426-1897-4761-a26c-70426a70941b 00:50:56.951 17:47:57 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:50:56.951 17:47:57 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:50:56.951 17:47:57 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:50:56.951 17:47:57 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 982f6426-1897-4761-a26c-70426a70941b 00:50:57.210 17:47:57 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:50:57.210 { 00:50:57.210 "name": "982f6426-1897-4761-a26c-70426a70941b", 00:50:57.210 "aliases": [ 00:50:57.210 "lvs/nvme0n1p0" 00:50:57.210 ], 00:50:57.210 "product_name": "Logical Volume", 00:50:57.210 "block_size": 4096, 00:50:57.210 "num_blocks": 26476544, 00:50:57.210 "uuid": "982f6426-1897-4761-a26c-70426a70941b", 00:50:57.210 "assigned_rate_limits": { 00:50:57.210 "rw_ios_per_sec": 0, 00:50:57.210 "rw_mbytes_per_sec": 0, 00:50:57.210 "r_mbytes_per_sec": 0, 00:50:57.210 "w_mbytes_per_sec": 0 00:50:57.210 }, 00:50:57.210 "claimed": false, 00:50:57.210 "zoned": false, 00:50:57.210 "supported_io_types": { 00:50:57.210 "read": true, 00:50:57.210 "write": true, 00:50:57.210 "unmap": true, 00:50:57.210 "flush": false, 00:50:57.210 "reset": true, 00:50:57.210 "nvme_admin": false, 00:50:57.210 "nvme_io": false, 00:50:57.210 "nvme_io_md": false, 00:50:57.210 "write_zeroes": true, 00:50:57.210 "zcopy": false, 00:50:57.210 "get_zone_info": false, 00:50:57.210 "zone_management": false, 00:50:57.210 "zone_append": false, 00:50:57.210 "compare": false, 00:50:57.210 "compare_and_write": false, 00:50:57.210 "abort": false, 00:50:57.210 "seek_hole": true, 00:50:57.210 "seek_data": true, 00:50:57.210 "copy": false, 00:50:57.210 "nvme_iov_md": false 00:50:57.210 }, 00:50:57.210 "driver_specific": { 00:50:57.210 "lvol": { 00:50:57.210 "lvol_store_uuid": "52f4a77b-704d-480d-9ace-1e790dcc70a6", 00:50:57.210 "base_bdev": "nvme0n1", 00:50:57.210 "thin_provision": true, 00:50:57.210 "num_allocated_clusters": 0, 00:50:57.210 "snapshot": false, 00:50:57.210 "clone": false, 00:50:57.210 "esnap_clone": false 00:50:57.210 } 00:50:57.210 } 00:50:57.210 } 00:50:57.210 ]' 00:50:57.210 17:47:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
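Each get_bdev_size call in this trace follows the same recipe: fetch the bdev's JSON over rpc.py bdev_get_bdevs, extract block_size and num_blocks with jq, multiply, and convert bytes to MiB. A condensed sketch of that arithmetic, assuming a running SPDK target and reusing the rpc.py path and bdev name from this run (the helper name bdev_size_mb is illustrative; the trace resumes below with the captured values):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdev_size_mb() {
    local info bs nb
    info=$("$rpc" bdev_get_bdevs -b "$1")
    bs=$(jq '.[] .block_size' <<< "$info")   # 4096 in this log
    nb=$(jq '.[] .num_blocks' <<< "$info")   # 26476544 for the lvol
    echo $(( bs * nb / 1024 / 1024 ))        # bytes -> MiB
}
bdev_size_mb 982f6426-1897-4761-a26c-70426a70941b   # -> 103424
# Cross-check: 4096 B/block * 26476544 blocks = 108447924224 B = 103424 MiB;
# the base namespace (4096 B * 1310720 blocks) likewise yields 5120 MiB.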
00:50:57.210 17:47:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:50:57.210 17:47:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:50:57.210 17:47:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:50:57.210 17:47:57 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:50:57.210 17:47:57 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:50:57.469 17:47:57 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:50:57.469 17:47:57 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:50:57.469 17:47:58 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:50:57.469 17:47:58 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 982f6426-1897-4761-a26c-70426a70941b 00:50:57.469 17:47:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=982f6426-1897-4761-a26c-70426a70941b 00:50:57.469 17:47:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:50:57.469 17:47:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:50:57.469 17:47:58 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:50:57.469 17:47:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 982f6426-1897-4761-a26c-70426a70941b 00:50:57.728 17:47:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:50:57.728 { 00:50:57.728 "name": "982f6426-1897-4761-a26c-70426a70941b", 00:50:57.728 "aliases": [ 00:50:57.728 "lvs/nvme0n1p0" 00:50:57.728 ], 00:50:57.728 "product_name": "Logical Volume", 00:50:57.728 "block_size": 4096, 00:50:57.728 "num_blocks": 26476544, 00:50:57.728 "uuid": "982f6426-1897-4761-a26c-70426a70941b", 00:50:57.728 "assigned_rate_limits": { 00:50:57.728 "rw_ios_per_sec": 0, 00:50:57.728 "rw_mbytes_per_sec": 0, 00:50:57.728 "r_mbytes_per_sec": 0, 00:50:57.728 "w_mbytes_per_sec": 0 00:50:57.728 }, 00:50:57.728 "claimed": false, 00:50:57.728 "zoned": false, 00:50:57.728 "supported_io_types": { 00:50:57.728 "read": true, 00:50:57.728 "write": true, 00:50:57.728 "unmap": true, 00:50:57.728 "flush": false, 00:50:57.728 "reset": true, 00:50:57.728 "nvme_admin": false, 00:50:57.728 "nvme_io": false, 00:50:57.728 "nvme_io_md": false, 00:50:57.728 "write_zeroes": true, 00:50:57.728 "zcopy": false, 00:50:57.728 "get_zone_info": false, 00:50:57.728 "zone_management": false, 00:50:57.728 "zone_append": false, 00:50:57.728 "compare": false, 00:50:57.728 "compare_and_write": false, 00:50:57.728 "abort": false, 00:50:57.728 "seek_hole": true, 00:50:57.728 "seek_data": true, 00:50:57.728 "copy": false, 00:50:57.728 "nvme_iov_md": false 00:50:57.728 }, 00:50:57.728 "driver_specific": { 00:50:57.728 "lvol": { 00:50:57.728 "lvol_store_uuid": "52f4a77b-704d-480d-9ace-1e790dcc70a6", 00:50:57.728 "base_bdev": "nvme0n1", 00:50:57.728 "thin_provision": true, 00:50:57.728 "num_allocated_clusters": 0, 00:50:57.728 "snapshot": false, 00:50:57.728 "clone": false, 00:50:57.728 "esnap_clone": false 00:50:57.728 } 00:50:57.728 } 00:50:57.728 } 00:50:57.728 ]' 00:50:57.728 17:47:58 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:50:57.728 17:47:58 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:50:57.728 17:47:58 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:50:57.989 17:47:58 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:50:57.989 17:47:58 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:50:57.989 17:47:58 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:50:57.989 17:47:58 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:50:57.989 17:47:58 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 982f6426-1897-4761-a26c-70426a70941b --l2p_dram_limit 10' 00:50:57.989 17:47:58 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:50:57.989 17:47:58 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:50:57.989 17:47:58 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:50:57.989 17:47:58 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:50:57.989 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:50:57.989 17:47:58 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 982f6426-1897-4761-a26c-70426a70941b --l2p_dram_limit 10 -c nvc0n1p0 00:50:57.989 [2024-11-26 17:47:58.635705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:57.989 [2024-11-26 17:47:58.635997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:50:57.989 [2024-11-26 17:47:58.636124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:50:57.989 [2024-11-26 17:47:58.636166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:57.989 [2024-11-26 17:47:58.636322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:57.989 [2024-11-26 17:47:58.636414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:50:57.989 [2024-11-26 17:47:58.636508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:50:57.989 [2024-11-26 17:47:58.636588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:57.989 [2024-11-26 17:47:58.636649] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:50:57.989 [2024-11-26 17:47:58.637749] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:50:57.989 [2024-11-26 17:47:58.637899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:57.989 [2024-11-26 17:47:58.637982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:50:57.989 [2024-11-26 17:47:58.638022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.256 ms 00:50:57.989 [2024-11-26 17:47:58.638054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:57.989 [2024-11-26 17:47:58.638193] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID aa0c69f4-4c7a-4c1b-bad9-b00bee17d220 00:50:57.989 [2024-11-26 17:47:58.639715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:57.989 [2024-11-26 17:47:58.639855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:50:57.989 [2024-11-26 17:47:58.639875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:50:57.989 [2024-11-26 17:47:58.639889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:57.989 [2024-11-26 17:47:58.647317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:57.989 [2024-11-26 
17:47:58.647354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:50:57.989 [2024-11-26 17:47:58.647367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.371 ms 00:50:57.989 [2024-11-26 17:47:58.647385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:57.989 [2024-11-26 17:47:58.647480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:57.989 [2024-11-26 17:47:58.647518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:50:57.989 [2024-11-26 17:47:58.647532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:50:57.989 [2024-11-26 17:47:58.647549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:57.989 [2024-11-26 17:47:58.647599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:57.989 [2024-11-26 17:47:58.647615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:50:57.989 [2024-11-26 17:47:58.647630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:50:57.989 [2024-11-26 17:47:58.647643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:57.989 [2024-11-26 17:47:58.647666] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:50:57.989 [2024-11-26 17:47:58.652565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:57.989 [2024-11-26 17:47:58.652614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:50:57.989 [2024-11-26 17:47:58.652633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.910 ms 00:50:57.989 [2024-11-26 17:47:58.652643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:57.989 [2024-11-26 17:47:58.652677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:57.989 [2024-11-26 17:47:58.652688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:50:57.989 [2024-11-26 17:47:58.652702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:50:57.989 [2024-11-26 17:47:58.652712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:57.989 [2024-11-26 17:47:58.652754] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:50:57.989 [2024-11-26 17:47:58.652876] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:50:57.989 [2024-11-26 17:47:58.652897] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:50:57.989 [2024-11-26 17:47:58.652911] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:50:57.989 [2024-11-26 17:47:58.652927] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:50:57.989 [2024-11-26 17:47:58.652940] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:50:57.989 [2024-11-26 17:47:58.652955] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:50:57.989 [2024-11-26 17:47:58.652968] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:50:57.989 [2024-11-26 17:47:58.652980] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:50:57.989 [2024-11-26 17:47:58.652990] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:50:57.989 [2024-11-26 17:47:58.653003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:57.989 [2024-11-26 17:47:58.653024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:50:57.989 [2024-11-26 17:47:58.653038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:50:57.989 [2024-11-26 17:47:58.653049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:57.989 [2024-11-26 17:47:58.653125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:57.989 [2024-11-26 17:47:58.653136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:50:57.989 [2024-11-26 17:47:58.653150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:50:57.989 [2024-11-26 17:47:58.653160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:57.989 [2024-11-26 17:47:58.653253] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:50:57.989 [2024-11-26 17:47:58.653267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:50:57.989 [2024-11-26 17:47:58.653280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:50:57.989 [2024-11-26 17:47:58.653291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:57.989 [2024-11-26 17:47:58.653305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:50:57.989 [2024-11-26 17:47:58.653316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:50:57.990 [2024-11-26 17:47:58.653327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:50:57.990 [2024-11-26 17:47:58.653337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:50:57.990 [2024-11-26 17:47:58.653349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:50:57.990 [2024-11-26 17:47:58.653359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:50:57.990 [2024-11-26 17:47:58.653371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:50:57.990 [2024-11-26 17:47:58.653382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:50:57.990 [2024-11-26 17:47:58.653394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:50:57.990 [2024-11-26 17:47:58.653404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:50:57.990 [2024-11-26 17:47:58.653416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:50:57.990 [2024-11-26 17:47:58.653425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:57.990 [2024-11-26 17:47:58.653440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:50:57.990 [2024-11-26 17:47:58.653449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:50:57.990 [2024-11-26 17:47:58.653462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:57.990 [2024-11-26 17:47:58.653472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:50:57.990 [2024-11-26 17:47:58.653484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:50:57.990 [2024-11-26 17:47:58.653511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:57.990 [2024-11-26 17:47:58.653524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:50:57.990 
[2024-11-26 17:47:58.653533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:50:57.990 [2024-11-26 17:47:58.653544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:57.990 [2024-11-26 17:47:58.653553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:50:57.990 [2024-11-26 17:47:58.653565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:50:57.990 [2024-11-26 17:47:58.653574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:57.990 [2024-11-26 17:47:58.653586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:50:57.990 [2024-11-26 17:47:58.653596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:50:57.990 [2024-11-26 17:47:58.653608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:50:57.990 [2024-11-26 17:47:58.653638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:50:57.990 [2024-11-26 17:47:58.653653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:50:57.990 [2024-11-26 17:47:58.653662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:50:57.990 [2024-11-26 17:47:58.653674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:50:57.990 [2024-11-26 17:47:58.653684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:50:57.990 [2024-11-26 17:47:58.653695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:50:57.990 [2024-11-26 17:47:58.653704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:50:57.990 [2024-11-26 17:47:58.653717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:50:57.990 [2024-11-26 17:47:58.653726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:57.990 [2024-11-26 17:47:58.653742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:50:57.990 [2024-11-26 17:47:58.653752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:50:57.990 [2024-11-26 17:47:58.653763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:57.990 [2024-11-26 17:47:58.653771] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:50:57.990 [2024-11-26 17:47:58.653784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:50:57.990 [2024-11-26 17:47:58.653794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:50:57.990 [2024-11-26 17:47:58.653808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:50:57.990 [2024-11-26 17:47:58.653818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:50:57.990 [2024-11-26 17:47:58.653832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:50:57.990 [2024-11-26 17:47:58.653842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:50:57.990 [2024-11-26 17:47:58.653854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:50:57.990 [2024-11-26 17:47:58.653863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:50:57.990 [2024-11-26 17:47:58.653875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:50:57.990 [2024-11-26 17:47:58.653889] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:50:57.990 [2024-11-26 
17:47:58.653907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:50:57.990 [2024-11-26 17:47:58.653919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:50:57.990 [2024-11-26 17:47:58.653931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:50:57.990 [2024-11-26 17:47:58.653942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:50:57.990 [2024-11-26 17:47:58.653954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:50:57.990 [2024-11-26 17:47:58.653965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:50:57.990 [2024-11-26 17:47:58.653979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:50:57.990 [2024-11-26 17:47:58.653990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:50:57.990 [2024-11-26 17:47:58.654003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:50:57.990 [2024-11-26 17:47:58.654013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:50:57.990 [2024-11-26 17:47:58.654027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:50:57.990 [2024-11-26 17:47:58.654037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:50:57.990 [2024-11-26 17:47:58.654050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:50:57.990 [2024-11-26 17:47:58.654060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:50:57.990 [2024-11-26 17:47:58.654074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:50:57.990 [2024-11-26 17:47:58.654085] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:50:57.990 [2024-11-26 17:47:58.654098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:50:57.990 [2024-11-26 17:47:58.654109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:50:57.990 [2024-11-26 17:47:58.654121] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:50:57.990 [2024-11-26 17:47:58.654132] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:50:57.990 [2024-11-26 17:47:58.654145] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:50:57.990 [2024-11-26 17:47:58.654157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:50:57.990 [2024-11-26 17:47:58.654169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:50:57.990 [2024-11-26 17:47:58.654181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.961 ms 00:50:57.990 [2024-11-26 17:47:58.654194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:50:57.990 [2024-11-26 17:47:58.654234] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:50:57.990 [2024-11-26 17:47:58.654252] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:51:02.184 [2024-11-26 17:48:02.405810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:02.184 [2024-11-26 17:48:02.405879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:51:02.184 [2024-11-26 17:48:02.405897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3757.665 ms 00:51:02.184 [2024-11-26 17:48:02.405911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:02.184 [2024-11-26 17:48:02.447029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:02.184 [2024-11-26 17:48:02.447276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:51:02.184 [2024-11-26 17:48:02.447302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.896 ms 00:51:02.184 [2024-11-26 17:48:02.447317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:02.184 [2024-11-26 17:48:02.447473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:02.184 [2024-11-26 17:48:02.447492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:51:02.184 [2024-11-26 17:48:02.447519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:51:02.184 [2024-11-26 17:48:02.447540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:02.184 [2024-11-26 17:48:02.498287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:02.184 [2024-11-26 17:48:02.498524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:51:02.184 [2024-11-26 17:48:02.498550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.784 ms 00:51:02.184 [2024-11-26 17:48:02.498566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:02.184 [2024-11-26 17:48:02.498615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:02.184 [2024-11-26 17:48:02.498630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:51:02.184 [2024-11-26 17:48:02.498641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:51:02.184 [2024-11-26 17:48:02.498680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:02.184 [2024-11-26 17:48:02.499179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:02.184 [2024-11-26 17:48:02.499214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:51:02.184 [2024-11-26 17:48:02.499226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:51:02.184 [2024-11-26 17:48:02.499240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:02.184 
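The `[: : integer expression expected` message near the top of this startup trace comes from restore.sh line 54 evaluating `'[' '' -eq 1 ']'`: POSIX test requires integer operands on both sides of -eq, so an empty variable makes the test error out (exit status 2) and fall through to the false branch, which is harmless here because the empty value simply means the optional mode was not requested. The size derivation above it also checks out: nb=26476544 blocks of 4 KiB each is 26476544 * 4 / 1024 = 103424 MiB, the bdev_size echoed into the FTL construct args. Below is a minimal bash sketch of the failing test plus two conventional guards; `flag` is a hypothetical stand-in for whatever variable restore.sh actually tests:

flag=""                                     # empty, as in the trace above
if [ "$flag" -eq 1 ]; then                  # prints "[: : integer expression expected", returns 2
    echo "optional mode"
fi
if [ "${flag:-0}" -eq 1 ]; then             # guard 1: default the empty value to 0
    echo "optional mode"
fi
if [[ -n "$flag" && "$flag" -eq 1 ]]; then  # guard 2: require non-empty before the integer compare
    echo "optional mode"
fi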
[2024-11-26 17:48:02.499340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:02.184 [2024-11-26 17:48:02.499359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:51:02.184 [2024-11-26 17:48:02.499370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:51:02.184 [2024-11-26 17:48:02.499397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:02.184 [2024-11-26 17:48:02.520270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:02.184 [2024-11-26 17:48:02.520319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:51:02.184 [2024-11-26 17:48:02.520335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.885 ms 00:51:02.184 [2024-11-26 17:48:02.520350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:02.184 [2024-11-26 17:48:02.548475] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:51:02.184 [2024-11-26 17:48:02.551697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:02.184 [2024-11-26 17:48:02.551731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:51:02.185 [2024-11-26 17:48:02.551751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.295 ms 00:51:02.185 [2024-11-26 17:48:02.551763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:02.185 [2024-11-26 17:48:02.652025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:02.185 [2024-11-26 17:48:02.652078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:51:02.185 [2024-11-26 17:48:02.652099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.377 ms 00:51:02.185 [2024-11-26 17:48:02.652111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:02.185 [2024-11-26 17:48:02.652302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:02.185 [2024-11-26 17:48:02.652317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:51:02.185 [2024-11-26 17:48:02.652334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:51:02.185 [2024-11-26 17:48:02.652345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:02.185 [2024-11-26 17:48:02.689348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:02.185 [2024-11-26 17:48:02.689393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:51:02.185 [2024-11-26 17:48:02.689413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.005 ms 00:51:02.185 [2024-11-26 17:48:02.689424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:02.185 [2024-11-26 17:48:02.725573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:02.185 [2024-11-26 17:48:02.725611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:51:02.185 [2024-11-26 17:48:02.725630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.155 ms 00:51:02.185 [2024-11-26 17:48:02.725642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:02.185 [2024-11-26 17:48:02.726363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:02.185 [2024-11-26 17:48:02.726394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:51:02.185 
[2024-11-26 17:48:02.726412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:51:02.185 [2024-11-26 17:48:02.726424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:02.185 [2024-11-26 17:48:02.832764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:02.185 [2024-11-26 17:48:02.832983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:51:02.185 [2024-11-26 17:48:02.833016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.454 ms 00:51:02.185 [2024-11-26 17:48:02.833028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:02.185 [2024-11-26 17:48:02.870987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:02.185 [2024-11-26 17:48:02.871036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:51:02.185 [2024-11-26 17:48:02.871055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.929 ms 00:51:02.185 [2024-11-26 17:48:02.871067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:02.444 [2024-11-26 17:48:02.908714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:02.444 [2024-11-26 17:48:02.908766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:51:02.444 [2024-11-26 17:48:02.908784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.658 ms 00:51:02.444 [2024-11-26 17:48:02.908794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:02.444 [2024-11-26 17:48:02.946218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:02.444 [2024-11-26 17:48:02.946261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:51:02.444 [2024-11-26 17:48:02.946279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.433 ms 00:51:02.444 [2024-11-26 17:48:02.946290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:02.444 [2024-11-26 17:48:02.946340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:02.444 [2024-11-26 17:48:02.946353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:51:02.444 [2024-11-26 17:48:02.946370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:51:02.444 [2024-11-26 17:48:02.946382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:02.444 [2024-11-26 17:48:02.946486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:02.444 [2024-11-26 17:48:02.946523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:51:02.444 [2024-11-26 17:48:02.946539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:51:02.444 [2024-11-26 17:48:02.946550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:02.444 [2024-11-26 17:48:02.947549] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4318.394 ms, result 0 00:51:02.444 { 00:51:02.444 "name": "ftl0", 00:51:02.444 "uuid": "aa0c69f4-4c7a-4c1b-bad9-b00bee17d220" 00:51:02.444 } 00:51:02.444 17:48:02 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:51:02.444 17:48:02 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:51:02.704 17:48:03 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:51:02.704 17:48:03 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:51:02.966 [2024-11-26 17:48:03.406223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:02.966 [2024-11-26 17:48:03.406301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:51:02.966 [2024-11-26 17:48:03.406319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:51:02.966 [2024-11-26 17:48:03.406332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:02.966 [2024-11-26 17:48:03.406360] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:51:02.966 [2024-11-26 17:48:03.410461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:02.966 [2024-11-26 17:48:03.410507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:51:02.966 [2024-11-26 17:48:03.410525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.083 ms
00:51:02.966 [2024-11-26 17:48:03.410537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:02.966 [2024-11-26 17:48:03.410792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:02.966 [2024-11-26 17:48:03.410808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:51:02.966 [2024-11-26 17:48:03.410823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms
00:51:02.966 [2024-11-26 17:48:03.410834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:02.966 [2024-11-26 17:48:03.413353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:02.966 [2024-11-26 17:48:03.413389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:51:02.966 [2024-11-26 17:48:03.413404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.503 ms
00:51:02.966 [2024-11-26 17:48:03.413414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:02.966 [2024-11-26 17:48:03.418417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:02.966 [2024-11-26 17:48:03.418456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:51:02.966 [2024-11-26 17:48:03.418471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.987 ms
00:51:02.966 [2024-11-26 17:48:03.418482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:02.966 [2024-11-26 17:48:03.454984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:02.966 [2024-11-26 17:48:03.455028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:51:02.966 [2024-11-26 17:48:03.455047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.455 ms
00:51:02.966 [2024-11-26 17:48:03.455057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:02.966 [2024-11-26 17:48:03.477256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:02.966 [2024-11-26 17:48:03.477299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:51:02.966 [2024-11-26 17:48:03.477318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.182 ms
00:51:02.966 [2024-11-26 17:48:03.477329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
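The `echo '{"subsystems": ['` / `save_subsystem_config -n bdev` / `echo ']}'` trio traced just before this unload is how the test captures a replayable JSON view of the bdev stack: rpc.py dumps only the bdev subsystem's configuration, and wrapping that dump in a top-level "subsystems" array produces the same shape that `spdk_dd --json` consumes later to recreate ftl0 from its saved UUID. A condensed sketch of that idiom, assuming a running SPDK target; the redirect target is a placeholder for the ftl.json the test keeps under test/ftl/config/:

{
    echo '{"subsystems": ['
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
} > ftl.json
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0   # then drop the bdev; its persist steps are traced around this point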
00:51:02.966 [2024-11-26 17:48:03.477481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:02.966 [2024-11-26 17:48:03.477513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:51:02.966 [2024-11-26 17:48:03.477529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms
00:51:02.966 [2024-11-26 17:48:03.477540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:02.967 [2024-11-26 17:48:03.514283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:02.967 [2024-11-26 17:48:03.514326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:51:02.967 [2024-11-26 17:48:03.514343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.774 ms
00:51:02.967 [2024-11-26 17:48:03.514353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:02.967 [2024-11-26 17:48:03.551096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:02.967 [2024-11-26 17:48:03.551139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:51:02.967 [2024-11-26 17:48:03.551156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.750 ms
00:51:02.967 [2024-11-26 17:48:03.551166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:02.967 [2024-11-26 17:48:03.587826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:02.967 [2024-11-26 17:48:03.587874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:51:02.967 [2024-11-26 17:48:03.587891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.667 ms
00:51:02.967 [2024-11-26 17:48:03.587902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:02.967 [2024-11-26 17:48:03.623798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:02.967 [2024-11-26 17:48:03.623991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:51:02.967 [2024-11-26 17:48:03.624018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.848 ms
00:51:02.967 [2024-11-26 17:48:03.624029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:02.967 [2024-11-26 17:48:03.624074] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:51:02.967 [2024-11-26 17:48:03.624092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[... Band 2 through Band 99 omitted: every band reports the identical 0 / 261120 wr_cnt: 0 state: free ...]
00:51:02.968 [2024-11-26 17:48:03.625380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:51:02.968 [2024-11-26 17:48:03.625398] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:51:02.968 [2024-11-26 17:48:03.625411] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa0c69f4-4c7a-4c1b-bad9-b00bee17d220
00:51:02.968 [2024-11-26 17:48:03.625423] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:51:02.968 [2024-11-26 17:48:03.625438] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:51:02.968 [2024-11-26 17:48:03.625453] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:51:02.968 [2024-11-26 17:48:03.625467] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:51:02.968 [2024-11-26 17:48:03.625477] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:51:02.968 [2024-11-26 17:48:03.625490] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:51:02.968 [2024-11-26 17:48:03.625516] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:51:02.968 [2024-11-26 17:48:03.625528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:51:02.968 [2024-11-26 17:48:03.625537] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:51:02.968 [2024-11-26 17:48:03.625550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:02.968 [2024-11-26 17:48:03.625561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:51:02.968 [2024-11-26 17:48:03.625574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.481 ms 00:51:02.968 [2024-11-26 17:48:03.625588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:02.968 [2024-11-26 17:48:03.645688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:02.968 [2024-11-26 17:48:03.645726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:51:02.968 [2024-11-26 17:48:03.645743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.054 ms 00:51:02.968 [2024-11-26 17:48:03.645754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:02.968 [2024-11-26 17:48:03.646296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:02.968 [2024-11-26 17:48:03.646315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:51:02.968 [2024-11-26 17:48:03.646332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:51:02.968 [2024-11-26 17:48:03.646342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:03.228 [2024-11-26 17:48:03.713823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:03.228 [2024-11-26 17:48:03.713876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:51:03.228 [2024-11-26 17:48:03.713892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:03.228 [2024-11-26 17:48:03.713904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:03.228 [2024-11-26 17:48:03.713976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:03.228 [2024-11-26 17:48:03.713987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:51:03.228 [2024-11-26 17:48:03.714005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:03.228 [2024-11-26 17:48:03.714016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:03.228 [2024-11-26 17:48:03.714111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:03.228 [2024-11-26 17:48:03.714125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:51:03.228 [2024-11-26 17:48:03.714139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:03.228 [2024-11-26 17:48:03.714149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:03.228 [2024-11-26 17:48:03.714175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:03.228 [2024-11-26 17:48:03.714186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:51:03.228 [2024-11-26 17:48:03.714199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:03.228 [2024-11-26 17:48:03.714211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:03.228 [2024-11-26 17:48:03.839762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:03.228 [2024-11-26 17:48:03.839825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:51:03.228 [2024-11-26 17:48:03.839845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:51:03.228 [2024-11-26 17:48:03.839857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:03.488 [2024-11-26 17:48:03.940787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:03.488 [2024-11-26 17:48:03.940853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:51:03.488 [2024-11-26 17:48:03.940876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:03.488 [2024-11-26 17:48:03.940887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:03.488 [2024-11-26 17:48:03.941011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:03.488 [2024-11-26 17:48:03.941025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:51:03.488 [2024-11-26 17:48:03.941040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:03.488 [2024-11-26 17:48:03.941051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:03.488 [2024-11-26 17:48:03.941121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:03.488 [2024-11-26 17:48:03.941134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:51:03.488 [2024-11-26 17:48:03.941148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:03.488 [2024-11-26 17:48:03.941158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:03.488 [2024-11-26 17:48:03.941272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:03.488 [2024-11-26 17:48:03.941287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:51:03.488 [2024-11-26 17:48:03.941301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:03.488 [2024-11-26 17:48:03.941312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:03.488 [2024-11-26 17:48:03.941353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:03.488 [2024-11-26 17:48:03.941365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:51:03.488 [2024-11-26 17:48:03.941378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:03.488 [2024-11-26 17:48:03.941390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:03.488 [2024-11-26 17:48:03.941434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:03.488 [2024-11-26 17:48:03.941445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:51:03.488 [2024-11-26 17:48:03.941459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:03.488 [2024-11-26 17:48:03.941470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:03.488 [2024-11-26 17:48:03.941542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:03.488 [2024-11-26 17:48:03.941555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:51:03.488 [2024-11-26 17:48:03.941568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:03.488 [2024-11-26 17:48:03.941579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:03.488 [2024-11-26 17:48:03.941730] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 536.342 ms, result 0 00:51:03.488 true 00:51:03.488 17:48:03 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79147 
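With the FTL shutdown finished (result 0, 536.342 ms), killprocess, traced through the next lines, stops the SPDK app that owned ftl0, and the test then moves to the data phase shown below: build a 1 GiB random test file, record its checksum, and replay it into the reloaded FTL bdev with spdk_dd. Note that the 248 MB/s reported a few lines down is plain dd throughput from /dev/urandom into the workspace filesystem, not FTL write bandwidth. A condensed sketch of that write phase; paths are shortened placeholders for the workspace paths in the log, and keeping the checksum in a file is just one way to hold the reference value for the later compare:

dd if=/dev/urandom of=testfile bs=4K count=256K   # 262144 x 4 KiB = 1 GiB of random data
md5sum testfile > testfile.md5                    # reference checksum for the read-back verify
spdk_dd --if=testfile --ob=ftl0 --json=ftl.json   # spdk_dd boots its own SPDK instance from the
                                                  # saved config and streams the file into ftl0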
00:51:03.488 17:48:03 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79147 ']' 00:51:03.488 17:48:03 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79147 00:51:03.488 17:48:03 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:51:03.488 17:48:03 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:51:03.488 17:48:03 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79147 00:51:03.488 killing process with pid 79147 00:51:03.488 17:48:04 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:51:03.488 17:48:04 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:51:03.488 17:48:04 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79147' 00:51:03.488 17:48:04 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79147 00:51:03.488 17:48:04 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79147 00:51:08.774 17:48:09 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:51:13.037 262144+0 records in 00:51:13.037 262144+0 records out 00:51:13.037 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.33498 s, 248 MB/s 00:51:13.037 17:48:13 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:51:14.938 17:48:15 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:51:14.938 [2024-11-26 17:48:15.354381] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:51:14.938 [2024-11-26 17:48:15.354745] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79400 ] 00:51:14.938 [2024-11-26 17:48:15.550037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:15.197 [2024-11-26 17:48:15.694516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:51:15.767 [2024-11-26 17:48:16.152487] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:51:15.767 [2024-11-26 17:48:16.152663] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:51:15.767 [2024-11-26 17:48:16.337718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:15.767 [2024-11-26 17:48:16.337788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:51:15.767 [2024-11-26 17:48:16.337806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:51:15.767 [2024-11-26 17:48:16.337818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:15.767 [2024-11-26 17:48:16.337905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:15.767 [2024-11-26 17:48:16.337918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:51:15.767 [2024-11-26 17:48:16.337930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:51:15.767 [2024-11-26 17:48:16.337953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:15.767 [2024-11-26 17:48:16.337979] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:51:15.767 [2024-11-26 17:48:16.339007] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:51:15.767 [2024-11-26 17:48:16.339034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:15.767 [2024-11-26 17:48:16.339046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:51:15.767 [2024-11-26 17:48:16.339058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.064 ms 00:51:15.767 [2024-11-26 17:48:16.339069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:15.767 [2024-11-26 17:48:16.341571] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:51:15.767 [2024-11-26 17:48:16.364240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:15.767 [2024-11-26 17:48:16.364320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:51:15.767 [2024-11-26 17:48:16.364341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.704 ms 00:51:15.767 [2024-11-26 17:48:16.364363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:15.767 [2024-11-26 17:48:16.364487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:15.767 [2024-11-26 17:48:16.364517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:51:15.767 [2024-11-26 17:48:16.364529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:51:15.767 [2024-11-26 17:48:16.364540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:15.767 [2024-11-26 17:48:16.378869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:15.767 [2024-11-26 17:48:16.379199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:51:15.767 [2024-11-26 17:48:16.379254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.236 ms 00:51:15.767 [2024-11-26 17:48:16.379267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:15.767 [2024-11-26 17:48:16.379427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:15.767 [2024-11-26 17:48:16.379443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:51:15.767 [2024-11-26 17:48:16.379456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:51:15.767 [2024-11-26 17:48:16.379469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:15.767 [2024-11-26 17:48:16.379580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:15.767 [2024-11-26 17:48:16.379596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:51:15.767 [2024-11-26 17:48:16.379608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:51:15.767 [2024-11-26 17:48:16.379629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:15.767 [2024-11-26 17:48:16.379664] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:51:15.767 [2024-11-26 17:48:16.385750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:15.767 [2024-11-26 17:48:16.385927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:51:15.767 [2024-11-26 17:48:16.385969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.106 ms 00:51:15.767 [2024-11-26 17:48:16.385981] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:15.767 [2024-11-26 17:48:16.386031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:15.767 [2024-11-26 17:48:16.386043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:51:15.767 [2024-11-26 17:48:16.386055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:51:15.767 [2024-11-26 17:48:16.386066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:15.767 [2024-11-26 17:48:16.386121] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:51:15.767 [2024-11-26 17:48:16.386152] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:51:15.767 [2024-11-26 17:48:16.386206] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:51:15.767 [2024-11-26 17:48:16.386227] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:51:15.767 [2024-11-26 17:48:16.386324] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:51:15.767 [2024-11-26 17:48:16.386340] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:51:15.767 [2024-11-26 17:48:16.386354] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:51:15.767 [2024-11-26 17:48:16.386369] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:51:15.767 [2024-11-26 17:48:16.386382] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:51:15.767 [2024-11-26 17:48:16.386395] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:51:15.767 [2024-11-26 17:48:16.386407] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:51:15.767 [2024-11-26 17:48:16.386422] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:51:15.767 [2024-11-26 17:48:16.386432] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:51:15.767 [2024-11-26 17:48:16.386445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:15.767 [2024-11-26 17:48:16.386455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:51:15.767 [2024-11-26 17:48:16.386467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:51:15.767 [2024-11-26 17:48:16.386477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:15.767 [2024-11-26 17:48:16.386574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:15.767 [2024-11-26 17:48:16.386587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:51:15.767 [2024-11-26 17:48:16.386598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:51:15.767 [2024-11-26 17:48:16.386609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:15.767 [2024-11-26 17:48:16.386718] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:51:15.767 [2024-11-26 17:48:16.386734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:51:15.767 [2024-11-26 17:48:16.386746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
00:51:15.767 [2024-11-26 17:48:16.386445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:15.767 [2024-11-26 17:48:16.386455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:51:15.767 [2024-11-26 17:48:16.386467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms
00:51:15.767 [2024-11-26 17:48:16.386477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:15.767 [2024-11-26 17:48:16.386574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:15.767 [2024-11-26 17:48:16.386587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:51:15.767 [2024-11-26 17:48:16.386598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms
00:51:15.767 [2024-11-26 17:48:16.386609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:15.767 [2024-11-26 17:48:16.386718] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:51:15.767 [2024-11-26 17:48:16.386734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:51:15.767 [2024-11-26 17:48:16.386746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:51:15.767 [2024-11-26 17:48:16.386758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:51:15.767 [2024-11-26 17:48:16.386769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:51:15.768 [2024-11-26 17:48:16.386780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:51:15.768 [2024-11-26 17:48:16.386790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:51:15.768 [2024-11-26 17:48:16.386800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:51:15.768 [2024-11-26 17:48:16.386811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:51:15.768 [2024-11-26 17:48:16.386820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:51:15.768 [2024-11-26 17:48:16.386830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:51:15.768 [2024-11-26 17:48:16.386841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:51:15.768 [2024-11-26 17:48:16.386851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:51:15.768 [2024-11-26 17:48:16.386872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:51:15.768 [2024-11-26 17:48:16.386883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:51:15.768 [2024-11-26 17:48:16.386892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:51:15.768 [2024-11-26 17:48:16.386903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:51:15.768 [2024-11-26 17:48:16.386913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:51:15.768 [2024-11-26 17:48:16.386923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:51:15.768 [2024-11-26 17:48:16.386932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:51:15.768 [2024-11-26 17:48:16.386942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:51:15.768 [2024-11-26 17:48:16.386952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:51:15.768 [2024-11-26 17:48:16.386962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:51:15.768 [2024-11-26 17:48:16.386972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:51:15.768 [2024-11-26 17:48:16.386983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:51:15.768 [2024-11-26 17:48:16.386993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:51:15.768 [2024-11-26 17:48:16.387002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:51:15.768 [2024-11-26 17:48:16.387012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:51:15.768 [2024-11-26 17:48:16.387021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:51:15.768 [2024-11-26 17:48:16.387030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:51:15.768 [2024-11-26 17:48:16.387040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:51:15.768 [2024-11-26 17:48:16.387049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:51:15.768 [2024-11-26 17:48:16.387058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:51:15.768 [2024-11-26 17:48:16.387067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:51:15.768 [2024-11-26 17:48:16.387076] ftl_layout.c: 130:dump_region: *NOTICE*:
[FTL][ftl0] Region trim_md_mirror 00:51:15.768 [2024-11-26 17:48:16.387086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:51:15.768 [2024-11-26 17:48:16.387095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:51:15.768 [2024-11-26 17:48:16.387115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:51:15.768 [2024-11-26 17:48:16.387124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:51:15.768 [2024-11-26 17:48:16.387132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:15.768 [2024-11-26 17:48:16.387141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:51:15.768 [2024-11-26 17:48:16.387150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:51:15.768 [2024-11-26 17:48:16.387159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:15.768 [2024-11-26 17:48:16.387170] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:51:15.768 [2024-11-26 17:48:16.387180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:51:15.768 [2024-11-26 17:48:16.387190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:51:15.768 [2024-11-26 17:48:16.387200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:51:15.768 [2024-11-26 17:48:16.387211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:51:15.768 [2024-11-26 17:48:16.387221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:51:15.768 [2024-11-26 17:48:16.387231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:51:15.768 [2024-11-26 17:48:16.387240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:51:15.768 [2024-11-26 17:48:16.387249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:51:15.768 [2024-11-26 17:48:16.387259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:51:15.768 [2024-11-26 17:48:16.387271] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:51:15.768 [2024-11-26 17:48:16.387284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:51:15.768 [2024-11-26 17:48:16.387301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:51:15.768 [2024-11-26 17:48:16.387312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:51:15.768 [2024-11-26 17:48:16.387323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:51:15.768 [2024-11-26 17:48:16.387335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:51:15.768 [2024-11-26 17:48:16.387346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:51:15.768 [2024-11-26 17:48:16.387356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:51:15.768 [2024-11-26 17:48:16.387367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:51:15.768 [2024-11-26 17:48:16.387385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:51:15.768 [2024-11-26 17:48:16.387396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:51:15.768 [2024-11-26 17:48:16.387406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:51:15.768 [2024-11-26 17:48:16.387417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:51:15.768 [2024-11-26 17:48:16.387427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:51:15.768 [2024-11-26 17:48:16.387438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:51:15.768 [2024-11-26 17:48:16.387448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:51:15.768 [2024-11-26 17:48:16.387476] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:51:15.768 [2024-11-26 17:48:16.387488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:51:15.768 [2024-11-26 17:48:16.387500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:51:15.768 [2024-11-26 17:48:16.387524] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:51:15.768 [2024-11-26 17:48:16.387537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:51:15.768 [2024-11-26 17:48:16.387548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:51:15.768 [2024-11-26 17:48:16.387561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:15.768 [2024-11-26 17:48:16.387573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:51:15.768 [2024-11-26 17:48:16.387585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.899 ms 00:51:15.768 [2024-11-26 17:48:16.387596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:15.768 [2024-11-26 17:48:16.437686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:15.768 [2024-11-26 17:48:16.437755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:51:15.768 [2024-11-26 17:48:16.437795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.111 ms 00:51:15.768 [2024-11-26 17:48:16.437807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:15.768 [2024-11-26 17:48:16.437928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:15.768 [2024-11-26 17:48:16.437941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:51:15.768 [2024-11-26 17:48:16.437953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.062 ms 00:51:15.768 [2024-11-26 17:48:16.437969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.028 [2024-11-26 17:48:16.499718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.028 [2024-11-26 17:48:16.499789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:51:16.028 [2024-11-26 17:48:16.499808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.720 ms 00:51:16.028 [2024-11-26 17:48:16.499820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.028 [2024-11-26 17:48:16.499904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.028 [2024-11-26 17:48:16.499926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:51:16.028 [2024-11-26 17:48:16.499938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:51:16.028 [2024-11-26 17:48:16.499948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.028 [2024-11-26 17:48:16.500835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.028 [2024-11-26 17:48:16.500853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:51:16.028 [2024-11-26 17:48:16.500866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.795 ms 00:51:16.028 [2024-11-26 17:48:16.500876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.029 [2024-11-26 17:48:16.501036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.029 [2024-11-26 17:48:16.501052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:51:16.029 [2024-11-26 17:48:16.501070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:51:16.029 [2024-11-26 17:48:16.501081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.029 [2024-11-26 17:48:16.525006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.029 [2024-11-26 17:48:16.525085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:51:16.029 [2024-11-26 17:48:16.525103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.936 ms 00:51:16.029 [2024-11-26 17:48:16.525114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.029 [2024-11-26 17:48:16.546815] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:51:16.029 [2024-11-26 17:48:16.547108] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:51:16.029 [2024-11-26 17:48:16.547155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.029 [2024-11-26 17:48:16.547169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:51:16.029 [2024-11-26 17:48:16.547185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.864 ms 00:51:16.029 [2024-11-26 17:48:16.547196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.029 [2024-11-26 17:48:16.579952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.029 [2024-11-26 17:48:16.580060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:51:16.029 [2024-11-26 17:48:16.580080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.728 ms 00:51:16.029 [2024-11-26 17:48:16.580092] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.029 [2024-11-26 17:48:16.601332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.029 [2024-11-26 17:48:16.601413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:51:16.029 [2024-11-26 17:48:16.601430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.160 ms 00:51:16.029 [2024-11-26 17:48:16.601457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.029 [2024-11-26 17:48:16.621317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.029 [2024-11-26 17:48:16.621394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:51:16.029 [2024-11-26 17:48:16.621412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.794 ms 00:51:16.029 [2024-11-26 17:48:16.621440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.029 [2024-11-26 17:48:16.622352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.029 [2024-11-26 17:48:16.622390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:51:16.029 [2024-11-26 17:48:16.622405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.708 ms 00:51:16.029 [2024-11-26 17:48:16.622426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.289 [2024-11-26 17:48:16.727800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.289 [2024-11-26 17:48:16.727901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:51:16.289 [2024-11-26 17:48:16.727922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.502 ms 00:51:16.289 [2024-11-26 17:48:16.727956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.289 [2024-11-26 17:48:16.745699] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:51:16.289 [2024-11-26 17:48:16.751040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.289 [2024-11-26 17:48:16.751259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:51:16.289 [2024-11-26 17:48:16.751293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.010 ms 00:51:16.289 [2024-11-26 17:48:16.751306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.289 [2024-11-26 17:48:16.751491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.289 [2024-11-26 17:48:16.751525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:51:16.289 [2024-11-26 17:48:16.751538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:51:16.289 [2024-11-26 17:48:16.751550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.289 [2024-11-26 17:48:16.751685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.289 [2024-11-26 17:48:16.751700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:51:16.289 [2024-11-26 17:48:16.751713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:51:16.289 [2024-11-26 17:48:16.751723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:16.289 [2024-11-26 17:48:16.751752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:16.289 [2024-11-26 17:48:16.751765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller
00:51:16.289 [2024-11-26 17:48:16.751776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:51:16.289 [2024-11-26 17:48:16.751787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:16.289 [2024-11-26 17:48:16.751838] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:51:16.289 [2024-11-26 17:48:16.751859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:16.289 [2024-11-26 17:48:16.751870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:51:16.289 [2024-11-26 17:48:16.751881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms
00:51:16.289 [2024-11-26 17:48:16.751893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:16.289 [2024-11-26 17:48:16.792561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:16.289 [2024-11-26 17:48:16.792893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:51:16.289 [2024-11-26 17:48:16.792942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.707 ms
00:51:16.289 [2024-11-26 17:48:16.792977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:16.289 [2024-11-26 17:48:16.793126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:16.289 [2024-11-26 17:48:16.793142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:51:16.289 [2024-11-26 17:48:16.793155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms
00:51:16.289 [2024-11-26 17:48:16.793167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:16.289 [2024-11-26 17:48:16.794980] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 457.352 ms, result 0
00:51:17.223  [2024-11-26T17:48:18.853Z] Copying: 24/1024 [MB] (24 MBps) ... [2024-11-26T17:48:58.888Z] Copying: 1024/1024 [MB] (average 24 MBps)
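(Editorial cross-check, not part of the captured console output: 1024 MB at the reported average of 24 MBps is roughly 43 seconds, which matches the wall-clock gap between the 'FTL startup' finish record above and the first shutdown record below.)

from datetime import datetime

# Timestamps copied verbatim from the surrounding records.
startup_done   = datetime.fromisoformat("2024-11-26 17:48:16.794980")
shutdown_start = datetime.fromisoformat("2024-11-26 17:48:58.518676")

window = (shutdown_start - startup_done).total_seconds()
print(f"copy window:  {window:.1f} s")           # ~41.7 s
print(f"implied rate: {1024 / window:.1f} MBps") # ~24.5 MBps, consistent with the log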
[2024-11-26 17:48:58.518676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:58.194 [2024-11-26 17:48:58.518746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:51:58.194 [2024-11-26 17:48:58.518767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:51:58.194 [2024-11-26 17:48:58.518779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:58.194 [2024-11-26 17:48:58.518806] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:51:58.194 [2024-11-26 17:48:58.523692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:58.194 [2024-11-26 17:48:58.523745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:51:58.194 [2024-11-26 17:48:58.523783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.872 ms
00:51:58.194 [2024-11-26 17:48:58.523795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:58.195 [2024-11-26 17:48:58.525792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:58.195 [2024-11-26 17:48:58.525836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:51:58.195 [2024-11-26 17:48:58.525851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.967 ms
00:51:58.195 [2024-11-26 17:48:58.525862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:58.195 [2024-11-26 17:48:58.543660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:58.195 [2024-11-26 17:48:58.543723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:51:58.195 [2024-11-26 17:48:58.543741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.803 ms
00:51:58.195 [2024-11-26 17:48:58.543753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:58.195 [2024-11-26 17:48:58.548836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:58.195 [2024-11-26 17:48:58.549016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:51:58.195 [2024-11-26 17:48:58.549045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.029 ms
00:51:58.195 [2024-11-26 17:48:58.549057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:58.195 [2024-11-26 17:48:58.590303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:58.195 [2024-11-26 17:48:58.590390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:51:58.195 [2024-11-26 17:48:58.590410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.203 ms
00:51:58.195 [2024-11-26 17:48:58.590423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:58.195 [2024-11-26 17:48:58.614143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:58.195 [2024-11-26 17:48:58.614228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:51:58.195 [2024-11-26 17:48:58.614250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.680 ms
00:51:58.195 [2024-11-26 17:48:58.614262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:58.195 [2024-11-26 17:48:58.614464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:58.195 [2024-11-26 17:48:58.614516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:51:58.195 [2024-11-26 17:48:58.614529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms
00:51:58.195 [2024-11-26 17:48:58.614540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:58.195 [2024-11-26 17:48:58.655382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:58.195 [2024-11-26 17:48:58.655473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:51:58.195 [2024-11-26 17:48:58.655511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.878 ms
00:51:58.195 [2024-11-26 17:48:58.655550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:58.195 [2024-11-26 17:48:58.694996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:58.195 [2024-11-26 17:48:58.695316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:51:58.195 [2024-11-26 17:48:58.695345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.403 ms
00:51:58.195 [2024-11-26 17:48:58.695357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:58.195 [2024-11-26 17:48:58.734404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:58.195 [2024-11-26 17:48:58.734749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:51:58.195 [2024-11-26 17:48:58.734782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.027 ms
00:51:58.195 [2024-11-26 17:48:58.734795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:58.195 [2024-11-26 17:48:58.774852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:51:58.195 [2024-11-26 17:48:58.774936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:51:58.195 [2024-11-26 17:48:58.774955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.932 ms
00:51:58.195 [2024-11-26 17:48:58.774968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:58.195 [2024-11-26 17:48:58.775038] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:51:58.195 [2024-11-26 17:48:58.775060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:51:58.195 [2024-11-26 17:48:58.775087 .. 17:48:58.776251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2 .. Band 100: 0 / 261120 wr_cnt: 0 state: free
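(Editorial cross-check, not part of the captured console output: every band reports the same 0 / 261120 free state after the clean shutdown. For scale, and assuming SPDK FTL's 4 KiB block size, which the log does not state, each band is about 1020 MiB, so the 100 bands together span roughly the 102400.00 MiB data_btm region of the base device.)

blocks_per_band = 261120   # "Band N: 0 / 261120"
bands = 100                # Band 1 .. Band 100 in the dump
ftl_block_size = 4096      # assumed, not stated in the log

band_mib = blocks_per_band * ftl_block_size / 2**20
print(band_mib)            # 1020.0 MiB per band
print(band_mib * bands)    # 102000.0 MiB, just under the 102400.00 MiB data_btm region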
00:51:58.196 [2024-11-26 17:48:58.776272] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:51:58.196 [2024-11-26 17:48:58.776293] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa0c69f4-4c7a-4c1b-bad9-b00bee17d220
00:51:58.196 [2024-11-26 17:48:58.776306] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:51:58.196 [2024-11-26
17:48:58.776317] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:51:58.196 [2024-11-26 17:48:58.776328] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:51:58.196 [2024-11-26 17:48:58.776339] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:51:58.196 [2024-11-26 17:48:58.776350] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:51:58.196 [2024-11-26 17:48:58.776380] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:51:58.196 [2024-11-26 17:48:58.776391] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:51:58.196 [2024-11-26 17:48:58.776400] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:51:58.196 [2024-11-26 17:48:58.776410] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:51:58.196 [2024-11-26 17:48:58.776421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:58.196 [2024-11-26 17:48:58.776431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:51:58.196 [2024-11-26 17:48:58.776443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.387 ms 00:51:58.196 [2024-11-26 17:48:58.776454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.196 [2024-11-26 17:48:58.797714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:58.196 [2024-11-26 17:48:58.797967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:51:58.196 [2024-11-26 17:48:58.797995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.216 ms 00:51:58.196 [2024-11-26 17:48:58.798009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.196 [2024-11-26 17:48:58.798635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:51:58.196 [2024-11-26 17:48:58.798652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:51:58.196 [2024-11-26 17:48:58.798665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:51:58.196 [2024-11-26 17:48:58.798694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.196 [2024-11-26 17:48:58.855756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:58.196 [2024-11-26 17:48:58.855844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:51:58.196 [2024-11-26 17:48:58.855862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:58.196 [2024-11-26 17:48:58.855875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.196 [2024-11-26 17:48:58.855980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:58.196 [2024-11-26 17:48:58.855994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:51:58.196 [2024-11-26 17:48:58.856005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:58.196 [2024-11-26 17:48:58.856027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.196 [2024-11-26 17:48:58.856140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:58.196 [2024-11-26 17:48:58.856155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:51:58.196 [2024-11-26 17:48:58.856167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:58.196 [2024-11-26 17:48:58.856178] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:51:58.196 [2024-11-26 17:48:58.856198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:58.196 [2024-11-26 17:48:58.856210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:51:58.196 [2024-11-26 17:48:58.856221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:58.196 [2024-11-26 17:48:58.856231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.455 [2024-11-26 17:48:58.996808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:58.455 [2024-11-26 17:48:58.996905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:51:58.455 [2024-11-26 17:48:58.996925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:58.455 [2024-11-26 17:48:58.996938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.455 [2024-11-26 17:48:59.112582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:58.456 [2024-11-26 17:48:59.112678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:51:58.456 [2024-11-26 17:48:59.112696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:58.456 [2024-11-26 17:48:59.112734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.456 [2024-11-26 17:48:59.112875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:58.456 [2024-11-26 17:48:59.112890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:51:58.456 [2024-11-26 17:48:59.112902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:58.456 [2024-11-26 17:48:59.112914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.456 [2024-11-26 17:48:59.112973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:58.456 [2024-11-26 17:48:59.112985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:51:58.456 [2024-11-26 17:48:59.112997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:58.456 [2024-11-26 17:48:59.113007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.456 [2024-11-26 17:48:59.113156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:58.456 [2024-11-26 17:48:59.113170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:51:58.456 [2024-11-26 17:48:59.113182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:58.456 [2024-11-26 17:48:59.113193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.456 [2024-11-26 17:48:59.113235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:58.456 [2024-11-26 17:48:59.113248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:51:58.456 [2024-11-26 17:48:59.113259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:51:58.456 [2024-11-26 17:48:59.113270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:51:58.456 [2024-11-26 17:48:59.113321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:51:58.456 [2024-11-26 17:48:59.113341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:51:58.456 [2024-11-26 17:48:59.113352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:51:58.456 [2024-11-26 17:48:59.113363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:58.456 [2024-11-26 17:48:59.113416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:51:58.456 [2024-11-26 17:48:59.113428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:51:58.456 [2024-11-26 17:48:59.113439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:51:58.456 [2024-11-26 17:48:59.113449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:51:58.456 [2024-11-26 17:48:59.113625] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 595.862 ms, result 0
00:52:00.360
00:52:00.360
00:52:00.360 17:49:00 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144
00:52:00.360 [2024-11-26 17:49:00.851069] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization...
00:52:00.360 [2024-11-26 17:49:00.851272] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79852 ]
00:52:00.360 [2024-11-26 17:49:01.051658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:52:00.618 [2024-11-26 17:49:01.203518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:52:01.187 [2024-11-26 17:49:01.667345] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:52:01.187 [2024-11-26 17:49:01.667443] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:52:01.187 [2024-11-26 17:49:01.833998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:52:01.187 [2024-11-26 17:49:01.834328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:52:01.187 [2024-11-26 17:49:01.834359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:52:01.187 [2024-11-26 17:49:01.834371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:52:01.187 [2024-11-26 17:49:01.834467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:52:01.187 [2024-11-26 17:49:01.834487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:52:01.187 [2024-11-26 17:49:01.834518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms
00:52:01.187 [2024-11-26 17:49:01.834530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:52:01.187 [2024-11-26 17:49:01.834574] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:52:01.187 [2024-11-26 17:49:01.835743] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:52:01.187 [2024-11-26 17:49:01.835772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:52:01.187 [2024-11-26 17:49:01.835786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:52:01.187 [2024-11-26 17:49:01.835798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.206 ms
00:52:01.187 [2024-11-26 17:49:01.835810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
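(Editorial cross-check, not part of the captured console output: the spdk_dd restore pass above reads --count=262144 blocks back from ftl0. Assuming SPDK FTL's 4 KiB block size, which the log does not state, that is exactly the 1024 MB the earlier copy loop reported.)

dd_count = 262144          # spdk_dd --count, in FTL blocks
ftl_block_size = 4096      # assumed block size in bytes

print(dd_count * ftl_block_size / 2**20)  # 1024.0, matches "Copying: 1024/1024 [MB]"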
00:52:01.187 [2024-11-26 17:49:01.838331] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:52:01.187 [2024-11-26 17:49:01.859388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:52:01.187 [2024-11-26 17:49:01.859446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:52:01.187 [2024-11-26 17:49:01.859465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.091 ms
00:52:01.187 [2024-11-26 17:49:01.859477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:52:01.187 [2024-11-26 17:49:01.859592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:52:01.187 [2024-11-26 17:49:01.859608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:52:01.187 [2024-11-26 17:49:01.859621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms
00:52:01.187 [2024-11-26 17:49:01.859632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:52:01.187 [2024-11-26 17:49:01.872969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:52:01.187 [2024-11-26 17:49:01.873224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:52:01.187 [2024-11-26 17:49:01.873253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.262 ms
00:52:01.187 [2024-11-26 17:49:01.873275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:52:01.187 [2024-11-26 17:49:01.873392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:52:01.187 [2024-11-26 17:49:01.873407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:52:01.187 [2024-11-26 17:49:01.873419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms
00:52:01.187 [2024-11-26 17:49:01.873429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:52:01.187 [2024-11-26 17:49:01.873534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:52:01.187 [2024-11-26 17:49:01.873549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:52:01.187 [2024-11-26 17:49:01.873561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:52:01.187 [2024-11-26 17:49:01.873572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:52:01.187 [2024-11-26 17:49:01.873608] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:52:01.187 [2024-11-26 17:49:01.879588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:52:01.187 [2024-11-26 17:49:01.879736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:52:01.187 [2024-11-26 17:49:01.879765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.998 ms
00:52:01.187 [2024-11-26 17:49:01.879777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:52:01.187 [2024-11-26 17:49:01.879817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:52:01.187 [2024-11-26 17:49:01.879829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:52:01.187 [2024-11-26 17:49:01.879840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:52:01.187 [2024-11-26 17:49:01.879851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:52:01.187 [2024-11-26 17:49:01.879894] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:52:01.187 [2024-11-26 17:49:01.879923] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:52:01.187 [2024-11-26 17:49:01.879965] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:52:01.187 [2024-11-26 17:49:01.879990] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:52:01.187 [2024-11-26 17:49:01.880088] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:52:01.187 [2024-11-26 17:49:01.880103] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:52:01.187 [2024-11-26 17:49:01.880117] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:52:01.187 [2024-11-26 17:49:01.880131] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:52:01.187 [2024-11-26 17:49:01.880144] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:52:01.187 [2024-11-26 17:49:01.880157] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:52:01.187 [2024-11-26 17:49:01.880168] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:52:01.187 [2024-11-26 17:49:01.880183] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:52:01.187 [2024-11-26 17:49:01.880195] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:52:01.187 [2024-11-26 17:49:01.880207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:01.187 [2024-11-26 17:49:01.880217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:52:01.187 [2024-11-26 17:49:01.880229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:52:01.188 [2024-11-26 17:49:01.880239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.188 [2024-11-26 17:49:01.880312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:01.188 [2024-11-26 17:49:01.880324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:52:01.188 [2024-11-26 17:49:01.880335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:52:01.188 [2024-11-26 17:49:01.880345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.188 [2024-11-26 17:49:01.880455] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:52:01.188 [2024-11-26 17:49:01.880471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:52:01.188 [2024-11-26 17:49:01.880484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:52:01.188 [2024-11-26 17:49:01.880511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:01.188 [2024-11-26 17:49:01.880524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:52:01.188 [2024-11-26 17:49:01.880534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:52:01.188 [2024-11-26 17:49:01.880544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:52:01.188 [2024-11-26 17:49:01.880554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:52:01.188 [2024-11-26 17:49:01.880565] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:52:01.188 [2024-11-26 17:49:01.880575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:52:01.188 [2024-11-26 17:49:01.880585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:52:01.188 [2024-11-26 17:49:01.880594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:52:01.447 [2024-11-26 17:49:01.880604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:52:01.447 [2024-11-26 17:49:01.880626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:52:01.447 [2024-11-26 17:49:01.880637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:52:01.447 [2024-11-26 17:49:01.880647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:01.447 [2024-11-26 17:49:01.880657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:52:01.447 [2024-11-26 17:49:01.880667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:52:01.447 [2024-11-26 17:49:01.880677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:01.447 [2024-11-26 17:49:01.880686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:52:01.447 [2024-11-26 17:49:01.880696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:52:01.447 [2024-11-26 17:49:01.880706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:52:01.447 [2024-11-26 17:49:01.880715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:52:01.447 [2024-11-26 17:49:01.880724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:52:01.447 [2024-11-26 17:49:01.880734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:52:01.447 [2024-11-26 17:49:01.880743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:52:01.447 [2024-11-26 17:49:01.880752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:52:01.447 [2024-11-26 17:49:01.880762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:52:01.447 [2024-11-26 17:49:01.880771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:52:01.447 [2024-11-26 17:49:01.880780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:52:01.447 [2024-11-26 17:49:01.880789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:52:01.447 [2024-11-26 17:49:01.880799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:52:01.447 [2024-11-26 17:49:01.880808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:52:01.447 [2024-11-26 17:49:01.880817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:52:01.447 [2024-11-26 17:49:01.880826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:52:01.447 [2024-11-26 17:49:01.880835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:52:01.447 [2024-11-26 17:49:01.880845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:52:01.447 [2024-11-26 17:49:01.880854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:52:01.447 [2024-11-26 17:49:01.880863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:52:01.447 [2024-11-26 17:49:01.880872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:01.447 [2024-11-26 
17:49:01.880882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:52:01.447 [2024-11-26 17:49:01.880891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:52:01.447 [2024-11-26 17:49:01.880900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:01.447 [2024-11-26 17:49:01.880908] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:52:01.447 [2024-11-26 17:49:01.880919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:52:01.447 [2024-11-26 17:49:01.880930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:52:01.447 [2024-11-26 17:49:01.880941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:01.447 [2024-11-26 17:49:01.880951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:52:01.447 [2024-11-26 17:49:01.880961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:52:01.447 [2024-11-26 17:49:01.880970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:52:01.448 [2024-11-26 17:49:01.880980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:52:01.448 [2024-11-26 17:49:01.880990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:52:01.448 [2024-11-26 17:49:01.880999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:52:01.448 [2024-11-26 17:49:01.881010] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:52:01.448 [2024-11-26 17:49:01.881024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:52:01.448 [2024-11-26 17:49:01.881039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:52:01.448 [2024-11-26 17:49:01.881050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:52:01.448 [2024-11-26 17:49:01.881061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:52:01.448 [2024-11-26 17:49:01.881072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:52:01.448 [2024-11-26 17:49:01.881082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:52:01.448 [2024-11-26 17:49:01.881093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:52:01.448 [2024-11-26 17:49:01.881103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:52:01.448 [2024-11-26 17:49:01.881114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:52:01.448 [2024-11-26 17:49:01.881124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:52:01.448 [2024-11-26 17:49:01.881134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:52:01.448 [2024-11-26 17:49:01.881144] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:52:01.448 [2024-11-26 17:49:01.881155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:52:01.448 [2024-11-26 17:49:01.881165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:52:01.448 [2024-11-26 17:49:01.881175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:52:01.448 [2024-11-26 17:49:01.881185] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:52:01.448 [2024-11-26 17:49:01.881197] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:52:01.448 [2024-11-26 17:49:01.881209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:52:01.448 [2024-11-26 17:49:01.881220] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:52:01.448 [2024-11-26 17:49:01.881230] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:52:01.448 [2024-11-26 17:49:01.881241] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:52:01.448 [2024-11-26 17:49:01.881251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:01.448 [2024-11-26 17:49:01.881262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:52:01.448 [2024-11-26 17:49:01.881275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.853 ms 00:52:01.448 [2024-11-26 17:49:01.881286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.448 [2024-11-26 17:49:01.932976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:01.448 [2024-11-26 17:49:01.933050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:52:01.448 [2024-11-26 17:49:01.933071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.712 ms 00:52:01.448 [2024-11-26 17:49:01.933088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.448 [2024-11-26 17:49:01.933220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:01.448 [2024-11-26 17:49:01.933232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:52:01.448 [2024-11-26 17:49:01.933244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:52:01.448 [2024-11-26 17:49:01.933255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.448 [2024-11-26 17:49:01.998114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:01.448 [2024-11-26 17:49:01.998198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:52:01.448 [2024-11-26 17:49:01.998218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.818 ms 00:52:01.448 [2024-11-26 17:49:01.998230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.448 [2024-11-26 17:49:01.998323] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:01.448 [2024-11-26 17:49:01.998341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:52:01.448 [2024-11-26 17:49:01.998354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:52:01.448 [2024-11-26 17:49:01.998365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.448 [2024-11-26 17:49:01.999258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:01.448 [2024-11-26 17:49:01.999286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:52:01.448 [2024-11-26 17:49:01.999299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms 00:52:01.448 [2024-11-26 17:49:01.999311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.448 [2024-11-26 17:49:01.999469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:01.448 [2024-11-26 17:49:01.999485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:52:01.448 [2024-11-26 17:49:01.999517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:52:01.448 [2024-11-26 17:49:01.999528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.448 [2024-11-26 17:49:02.024654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:01.448 [2024-11-26 17:49:02.024728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:52:01.448 [2024-11-26 17:49:02.024748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.138 ms 00:52:01.448 [2024-11-26 17:49:02.024760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.448 [2024-11-26 17:49:02.047697] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:52:01.448 [2024-11-26 17:49:02.047762] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:52:01.448 [2024-11-26 17:49:02.047784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:01.448 [2024-11-26 17:49:02.047798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:52:01.448 [2024-11-26 17:49:02.047815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.862 ms 00:52:01.448 [2024-11-26 17:49:02.047826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.448 [2024-11-26 17:49:02.080710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:01.448 [2024-11-26 17:49:02.080789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:52:01.448 [2024-11-26 17:49:02.080810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.847 ms 00:52:01.448 [2024-11-26 17:49:02.080821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.448 [2024-11-26 17:49:02.102745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:01.448 [2024-11-26 17:49:02.102977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:52:01.448 [2024-11-26 17:49:02.103008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.845 ms 00:52:01.448 [2024-11-26 17:49:02.103019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.448 [2024-11-26 17:49:02.122426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:52:01.448 [2024-11-26 17:49:02.122636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:52:01.448 [2024-11-26 17:49:02.122666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.368 ms 00:52:01.448 [2024-11-26 17:49:02.122678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.448 [2024-11-26 17:49:02.123760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:01.448 [2024-11-26 17:49:02.123787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:52:01.448 [2024-11-26 17:49:02.123806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.852 ms 00:52:01.448 [2024-11-26 17:49:02.123818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.708 [2024-11-26 17:49:02.230301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:01.708 [2024-11-26 17:49:02.230641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:52:01.708 [2024-11-26 17:49:02.230682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.618 ms 00:52:01.708 [2024-11-26 17:49:02.230695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.708 [2024-11-26 17:49:02.246737] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:52:01.708 [2024-11-26 17:49:02.252082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:01.708 [2024-11-26 17:49:02.252127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:52:01.708 [2024-11-26 17:49:02.252147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.309 ms 00:52:01.708 [2024-11-26 17:49:02.252158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.708 [2024-11-26 17:49:02.252319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:01.708 [2024-11-26 17:49:02.252335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:52:01.708 [2024-11-26 17:49:02.252353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:52:01.708 [2024-11-26 17:49:02.252365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.708 [2024-11-26 17:49:02.252483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:01.708 [2024-11-26 17:49:02.252510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:52:01.708 [2024-11-26 17:49:02.252523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:52:01.708 [2024-11-26 17:49:02.252534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.708 [2024-11-26 17:49:02.252565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:01.708 [2024-11-26 17:49:02.252577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:52:01.708 [2024-11-26 17:49:02.252588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:52:01.708 [2024-11-26 17:49:02.252599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.708 [2024-11-26 17:49:02.252649] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:52:01.708 [2024-11-26 17:49:02.252662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:01.708 [2024-11-26 17:49:02.252674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on 
startup 00:52:01.708 [2024-11-26 17:49:02.252685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:52:01.708 [2024-11-26 17:49:02.252696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.708 [2024-11-26 17:49:02.293186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:01.708 [2024-11-26 17:49:02.293465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:52:01.708 [2024-11-26 17:49:02.293661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.529 ms 00:52:01.708 [2024-11-26 17:49:02.293703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.708 [2024-11-26 17:49:02.293846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:01.708 [2024-11-26 17:49:02.293991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:52:01.708 [2024-11-26 17:49:02.294114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:52:01.708 [2024-11-26 17:49:02.294148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:01.708 [2024-11-26 17:49:02.295874] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 461.997 ms, result 0 00:52:03.085  [2024-11-26T17:49:04.730Z] Copying: 26/1024 [MB] (26 MBps) [2024-11-26T17:49:05.668Z] Copying: 53/1024 [MB] (27 MBps) [2024-11-26T17:49:06.603Z] Copying: 81/1024 [MB] (27 MBps) [2024-11-26T17:49:07.539Z] Copying: 108/1024 [MB] (27 MBps) [2024-11-26T17:49:08.916Z] Copying: 136/1024 [MB] (27 MBps) [2024-11-26T17:49:09.851Z] Copying: 163/1024 [MB] (27 MBps) [2024-11-26T17:49:10.787Z] Copying: 191/1024 [MB] (27 MBps) [2024-11-26T17:49:11.723Z] Copying: 219/1024 [MB] (27 MBps) [2024-11-26T17:49:12.658Z] Copying: 245/1024 [MB] (26 MBps) [2024-11-26T17:49:13.618Z] Copying: 272/1024 [MB] (27 MBps) [2024-11-26T17:49:14.556Z] Copying: 300/1024 [MB] (27 MBps) [2024-11-26T17:49:15.934Z] Copying: 327/1024 [MB] (27 MBps) [2024-11-26T17:49:16.871Z] Copying: 356/1024 [MB] (28 MBps) [2024-11-26T17:49:17.807Z] Copying: 382/1024 [MB] (26 MBps) [2024-11-26T17:49:18.765Z] Copying: 409/1024 [MB] (26 MBps) [2024-11-26T17:49:19.704Z] Copying: 435/1024 [MB] (26 MBps) [2024-11-26T17:49:20.643Z] Copying: 462/1024 [MB] (26 MBps) [2024-11-26T17:49:21.582Z] Copying: 488/1024 [MB] (26 MBps) [2024-11-26T17:49:22.520Z] Copying: 515/1024 [MB] (26 MBps) [2024-11-26T17:49:23.913Z] Copying: 541/1024 [MB] (26 MBps) [2024-11-26T17:49:24.513Z] Copying: 568/1024 [MB] (27 MBps) [2024-11-26T17:49:25.891Z] Copying: 596/1024 [MB] (28 MBps) [2024-11-26T17:49:26.829Z] Copying: 623/1024 [MB] (26 MBps) [2024-11-26T17:49:27.765Z] Copying: 649/1024 [MB] (26 MBps) [2024-11-26T17:49:28.701Z] Copying: 676/1024 [MB] (27 MBps) [2024-11-26T17:49:29.638Z] Copying: 702/1024 [MB] (26 MBps) [2024-11-26T17:49:30.573Z] Copying: 729/1024 [MB] (26 MBps) [2024-11-26T17:49:31.510Z] Copying: 755/1024 [MB] (26 MBps) [2024-11-26T17:49:32.889Z] Copying: 782/1024 [MB] (26 MBps) [2024-11-26T17:49:33.823Z] Copying: 808/1024 [MB] (26 MBps) [2024-11-26T17:49:34.835Z] Copying: 834/1024 [MB] (26 MBps) [2024-11-26T17:49:35.770Z] Copying: 860/1024 [MB] (26 MBps) [2024-11-26T17:49:36.707Z] Copying: 886/1024 [MB] (25 MBps) [2024-11-26T17:49:37.645Z] Copying: 913/1024 [MB] (26 MBps) [2024-11-26T17:49:38.579Z] Copying: 940/1024 [MB] (27 MBps) [2024-11-26T17:49:39.514Z] Copying: 968/1024 [MB] (28 MBps) [2024-11-26T17:49:40.893Z] Copying: 995/1024 [MB] (26 MBps) 
[2024-11-26T17:49:40.893Z] Copying: 1021/1024 [MB] (26 MBps) [2024-11-26T17:49:40.893Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-26 17:49:40.626853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:40.199 [2024-11-26 17:49:40.626965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:52:40.199 [2024-11-26 17:49:40.627244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:52:40.199 [2024-11-26 17:49:40.627265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.199 [2024-11-26 17:49:40.627310] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:52:40.199 [2024-11-26 17:49:40.635125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:40.199 [2024-11-26 17:49:40.635188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:52:40.199 [2024-11-26 17:49:40.635211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.797 ms 00:52:40.199 [2024-11-26 17:49:40.635229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.199 [2024-11-26 17:49:40.635612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:40.199 [2024-11-26 17:49:40.635636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:52:40.199 [2024-11-26 17:49:40.635655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:52:40.199 [2024-11-26 17:49:40.635674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.199 [2024-11-26 17:49:40.639785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:40.199 [2024-11-26 17:49:40.639811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:52:40.199 [2024-11-26 17:49:40.639825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.091 ms 00:52:40.199 [2024-11-26 17:49:40.639844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.199 [2024-11-26 17:49:40.645876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:40.199 [2024-11-26 17:49:40.646050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:52:40.199 [2024-11-26 17:49:40.646076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.017 ms 00:52:40.199 [2024-11-26 17:49:40.646091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.199 [2024-11-26 17:49:40.684064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:40.199 [2024-11-26 17:49:40.684243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:52:40.199 [2024-11-26 17:49:40.684266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.940 ms 00:52:40.199 [2024-11-26 17:49:40.684278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.200 [2024-11-26 17:49:40.706315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:40.200 [2024-11-26 17:49:40.706355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:52:40.200 [2024-11-26 17:49:40.706381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.031 ms 00:52:40.200 [2024-11-26 17:49:40.706407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.200 [2024-11-26 17:49:40.706575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:40.200 [2024-11-26 
17:49:40.706590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:52:40.200 [2024-11-26 17:49:40.706602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:52:40.200 [2024-11-26 17:49:40.706628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.200 [2024-11-26 17:49:40.743741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:40.200 [2024-11-26 17:49:40.743780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:52:40.200 [2024-11-26 17:49:40.743796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.155 ms 00:52:40.200 [2024-11-26 17:49:40.743807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.200 [2024-11-26 17:49:40.780732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:40.200 [2024-11-26 17:49:40.780909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:52:40.200 [2024-11-26 17:49:40.780932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.945 ms 00:52:40.200 [2024-11-26 17:49:40.780943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.200 [2024-11-26 17:49:40.817600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:40.200 [2024-11-26 17:49:40.817640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:52:40.200 [2024-11-26 17:49:40.817656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.641 ms 00:52:40.200 [2024-11-26 17:49:40.817667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.200 [2024-11-26 17:49:40.854018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:40.200 [2024-11-26 17:49:40.854059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:52:40.200 [2024-11-26 17:49:40.854074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.320 ms 00:52:40.200 [2024-11-26 17:49:40.854084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.200 [2024-11-26 17:49:40.854124] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:52:40.200 [2024-11-26 17:49:40.854150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 
17:49:40.854264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 
00:52:40.200 [2024-11-26 17:49:40.854559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 
wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:52:40.200 [2024-11-26 17:49:40.854906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.854917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.854929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.854940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.854950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.854961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.854971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.854981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.854992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 84: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:52:40.201 [2024-11-26 17:49:40.855327] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:52:40.201 [2024-11-26 17:49:40.855337] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa0c69f4-4c7a-4c1b-bad9-b00bee17d220 00:52:40.201 [2024-11-26 17:49:40.855349] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:52:40.201 [2024-11-26 17:49:40.855359] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:52:40.201 [2024-11-26 17:49:40.855369] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:52:40.201 [2024-11-26 17:49:40.855386] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:52:40.201 [2024-11-26 17:49:40.855410] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:52:40.201 [2024-11-26 17:49:40.855421] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:52:40.201 [2024-11-26 17:49:40.855432] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:52:40.201 [2024-11-26 17:49:40.855441] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:52:40.201 [2024-11-26 17:49:40.855450] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:52:40.201 [2024-11-26 17:49:40.855461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:40.201 [2024-11-26 17:49:40.855472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:52:40.201 [2024-11-26 17:49:40.855483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.341 ms 00:52:40.201 [2024-11-26 17:49:40.855508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.201 [2024-11-26 17:49:40.877052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:40.201 [2024-11-26 17:49:40.877087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:52:40.201 [2024-11-26 17:49:40.877102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.539 ms 00:52:40.201 [2024-11-26 17:49:40.877114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.201 [2024-11-26 17:49:40.877705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:40.201 [2024-11-26 17:49:40.877719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:52:40.201 [2024-11-26 17:49:40.877737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:52:40.201 [2024-11-26 17:49:40.877748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.460 [2024-11-26 17:49:40.933464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:40.460 [2024-11-26 17:49:40.933545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:52:40.460 [2024-11-26 17:49:40.933586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:40.460 [2024-11-26 17:49:40.933597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.460 [2024-11-26 17:49:40.933685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:40.460 [2024-11-26 17:49:40.933697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:52:40.460 [2024-11-26 17:49:40.933715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:40.460 [2024-11-26 17:49:40.933726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.460 [2024-11-26 17:49:40.933808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:40.460 [2024-11-26 17:49:40.933821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:52:40.460 [2024-11-26 17:49:40.933832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:40.460 [2024-11-26 17:49:40.933842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.460 [2024-11-26 17:49:40.933862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:40.460 [2024-11-26 17:49:40.933874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:52:40.460 [2024-11-26 17:49:40.933885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:40.460 [2024-11-26 17:49:40.933900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.460 [2024-11-26 17:49:41.073242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:40.460 [2024-11-26 17:49:41.073598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:52:40.460 [2024-11-26 17:49:41.073627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:52:40.460 [2024-11-26 17:49:41.073639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.720 [2024-11-26 17:49:41.182318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:40.720 [2024-11-26 17:49:41.182394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:52:40.720 [2024-11-26 17:49:41.182419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:40.720 [2024-11-26 17:49:41.182431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.720 [2024-11-26 17:49:41.182581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:40.720 [2024-11-26 17:49:41.182596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:52:40.720 [2024-11-26 17:49:41.182608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:40.720 [2024-11-26 17:49:41.182619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.720 [2024-11-26 17:49:41.182671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:40.720 [2024-11-26 17:49:41.182684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:52:40.720 [2024-11-26 17:49:41.182695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:40.720 [2024-11-26 17:49:41.182705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.720 [2024-11-26 17:49:41.182849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:40.720 [2024-11-26 17:49:41.182863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:52:40.720 [2024-11-26 17:49:41.182874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:40.720 [2024-11-26 17:49:41.182885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.720 [2024-11-26 17:49:41.182924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:40.720 [2024-11-26 17:49:41.182937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:52:40.720 [2024-11-26 17:49:41.182948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:40.720 [2024-11-26 17:49:41.182959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.720 [2024-11-26 17:49:41.183011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:40.720 [2024-11-26 17:49:41.183023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:52:40.720 [2024-11-26 17:49:41.183035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:40.720 [2024-11-26 17:49:41.183045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.720 [2024-11-26 17:49:41.183094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:52:40.720 [2024-11-26 17:49:41.183107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:52:40.720 [2024-11-26 17:49:41.183118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:52:40.720 [2024-11-26 17:49:41.183128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:40.720 [2024-11-26 17:49:41.183280] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 557.300 ms, result 0 00:52:41.658 00:52:41.658 00:52:41.658 17:49:42 ftl.ftl_restore -- ftl/restore.sh@76 
-- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:52:43.564 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:52:43.564 17:49:44 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:52:43.564 [2024-11-26 17:49:44.228528] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:52:43.564 [2024-11-26 17:49:44.228674] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80288 ] 00:52:43.822 [2024-11-26 17:49:44.412485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:44.081 [2024-11-26 17:49:44.550571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:44.342 [2024-11-26 17:49:44.977030] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:52:44.342 [2024-11-26 17:49:44.977274] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:52:44.602 [2024-11-26 17:49:45.143846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:44.602 [2024-11-26 17:49:45.144181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:52:44.602 [2024-11-26 17:49:45.144227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:52:44.602 [2024-11-26 17:49:45.144243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:44.602 [2024-11-26 17:49:45.144351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:44.602 [2024-11-26 17:49:45.144372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:52:44.602 [2024-11-26 17:49:45.144385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:52:44.602 [2024-11-26 17:49:45.144397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:44.602 [2024-11-26 17:49:45.144426] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:52:44.602 [2024-11-26 17:49:45.145638] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:52:44.602 [2024-11-26 17:49:45.145674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:44.602 [2024-11-26 17:49:45.145687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:52:44.602 [2024-11-26 17:49:45.145699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.256 ms 00:52:44.602 [2024-11-26 17:49:45.145710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:44.602 [2024-11-26 17:49:45.148248] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:52:44.602 [2024-11-26 17:49:45.169932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:44.602 [2024-11-26 17:49:45.170030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:52:44.602 [2024-11-26 17:49:45.170052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.714 ms 00:52:44.602 [2024-11-26 17:49:45.170067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:44.602 [2024-11-26 17:49:45.170216] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:44.602 [2024-11-26 17:49:45.170235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:52:44.602 [2024-11-26 17:49:45.170250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:52:44.602 [2024-11-26 17:49:45.170264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:44.602 [2024-11-26 17:49:45.184860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:44.602 [2024-11-26 17:49:45.184947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:52:44.602 [2024-11-26 17:49:45.184966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.473 ms 00:52:44.602 [2024-11-26 17:49:45.184989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:44.602 [2024-11-26 17:49:45.185125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:44.602 [2024-11-26 17:49:45.185143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:52:44.602 [2024-11-26 17:49:45.185155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:52:44.602 [2024-11-26 17:49:45.185167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:44.602 [2024-11-26 17:49:45.185280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:44.602 [2024-11-26 17:49:45.185293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:52:44.602 [2024-11-26 17:49:45.185305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:52:44.602 [2024-11-26 17:49:45.185316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:44.603 [2024-11-26 17:49:45.185356] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:52:44.603 [2024-11-26 17:49:45.191690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:44.603 [2024-11-26 17:49:45.191752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:52:44.603 [2024-11-26 17:49:45.191775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.355 ms 00:52:44.603 [2024-11-26 17:49:45.191786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:44.603 [2024-11-26 17:49:45.191843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:44.603 [2024-11-26 17:49:45.191854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:52:44.603 [2024-11-26 17:49:45.191866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:52:44.603 [2024-11-26 17:49:45.191876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:44.603 [2024-11-26 17:49:45.191945] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:52:44.603 [2024-11-26 17:49:45.191976] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:52:44.603 [2024-11-26 17:49:45.192018] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:52:44.603 [2024-11-26 17:49:45.192044] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:52:44.603 [2024-11-26 17:49:45.192141] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:52:44.603 
[2024-11-26 17:49:45.192155] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:52:44.603 [2024-11-26 17:49:45.192170] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:52:44.603 [2024-11-26 17:49:45.192185] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:52:44.603 [2024-11-26 17:49:45.192199] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:52:44.603 [2024-11-26 17:49:45.192211] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:52:44.603 [2024-11-26 17:49:45.192222] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:52:44.603 [2024-11-26 17:49:45.192239] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:52:44.603 [2024-11-26 17:49:45.192250] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:52:44.603 [2024-11-26 17:49:45.192262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:44.603 [2024-11-26 17:49:45.192273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:52:44.603 [2024-11-26 17:49:45.192284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:52:44.603 [2024-11-26 17:49:45.192295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:44.603 [2024-11-26 17:49:45.192377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:44.603 [2024-11-26 17:49:45.192389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:52:44.603 [2024-11-26 17:49:45.192400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:52:44.603 [2024-11-26 17:49:45.192411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:44.603 [2024-11-26 17:49:45.192539] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:52:44.603 [2024-11-26 17:49:45.192558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:52:44.603 [2024-11-26 17:49:45.192570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:52:44.603 [2024-11-26 17:49:45.192581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:44.603 [2024-11-26 17:49:45.192593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:52:44.603 [2024-11-26 17:49:45.192603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:52:44.603 [2024-11-26 17:49:45.192632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:52:44.603 [2024-11-26 17:49:45.192644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:52:44.603 [2024-11-26 17:49:45.192654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:52:44.603 [2024-11-26 17:49:45.192664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:52:44.603 [2024-11-26 17:49:45.192676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:52:44.603 [2024-11-26 17:49:45.192687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:52:44.603 [2024-11-26 17:49:45.192696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:52:44.603 [2024-11-26 17:49:45.192721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:52:44.603 [2024-11-26 
17:49:45.192731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:52:44.603 [2024-11-26 17:49:45.192741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:44.603 [2024-11-26 17:49:45.192751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:52:44.603 [2024-11-26 17:49:45.192760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:52:44.603 [2024-11-26 17:49:45.192772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:44.603 [2024-11-26 17:49:45.192781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:52:44.603 [2024-11-26 17:49:45.192791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:52:44.603 [2024-11-26 17:49:45.192801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:52:44.603 [2024-11-26 17:49:45.192810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:52:44.603 [2024-11-26 17:49:45.192819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:52:44.603 [2024-11-26 17:49:45.192828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:52:44.603 [2024-11-26 17:49:45.192836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:52:44.603 [2024-11-26 17:49:45.192845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:52:44.603 [2024-11-26 17:49:45.192855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:52:44.603 [2024-11-26 17:49:45.192864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:52:44.603 [2024-11-26 17:49:45.192873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:52:44.603 [2024-11-26 17:49:45.192882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:52:44.603 [2024-11-26 17:49:45.192891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:52:44.603 [2024-11-26 17:49:45.192900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:52:44.603 [2024-11-26 17:49:45.192909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:52:44.603 [2024-11-26 17:49:45.192918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:52:44.603 [2024-11-26 17:49:45.192927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:52:44.603 [2024-11-26 17:49:45.192936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:52:44.603 [2024-11-26 17:49:45.192944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:52:44.603 [2024-11-26 17:49:45.192953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:52:44.603 [2024-11-26 17:49:45.192962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:44.603 [2024-11-26 17:49:45.192971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:52:44.603 [2024-11-26 17:49:45.192980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:52:44.603 [2024-11-26 17:49:45.192992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:44.603 [2024-11-26 17:49:45.193001] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:52:44.603 [2024-11-26 17:49:45.193012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:52:44.603 [2024-11-26 17:49:45.193022] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.00 MiB 00:52:44.603 [2024-11-26 17:49:45.193031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:52:44.603 [2024-11-26 17:49:45.193042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:52:44.603 [2024-11-26 17:49:45.193052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:52:44.603 [2024-11-26 17:49:45.193062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:52:44.603 [2024-11-26 17:49:45.193072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:52:44.603 [2024-11-26 17:49:45.193081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:52:44.603 [2024-11-26 17:49:45.193091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:52:44.603 [2024-11-26 17:49:45.193102] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:52:44.603 [2024-11-26 17:49:45.193115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:52:44.603 [2024-11-26 17:49:45.193132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:52:44.603 [2024-11-26 17:49:45.193142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:52:44.603 [2024-11-26 17:49:45.193153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:52:44.603 [2024-11-26 17:49:45.193164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:52:44.603 [2024-11-26 17:49:45.193175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:52:44.603 [2024-11-26 17:49:45.193186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:52:44.603 [2024-11-26 17:49:45.193196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:52:44.603 [2024-11-26 17:49:45.193208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:52:44.603 [2024-11-26 17:49:45.193220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:52:44.603 [2024-11-26 17:49:45.193231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:52:44.603 [2024-11-26 17:49:45.193242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:52:44.603 [2024-11-26 17:49:45.193252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:52:44.603 [2024-11-26 17:49:45.193263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:52:44.603 [2024-11-26 17:49:45.193275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x7220 blk_sz:0x13c0e0 00:52:44.604 [2024-11-26 17:49:45.193286] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:52:44.604 [2024-11-26 17:49:45.193297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:52:44.604 [2024-11-26 17:49:45.193309] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:52:44.604 [2024-11-26 17:49:45.193320] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:52:44.604 [2024-11-26 17:49:45.193330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:52:44.604 [2024-11-26 17:49:45.193342] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:52:44.604 [2024-11-26 17:49:45.193354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:44.604 [2024-11-26 17:49:45.193365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:52:44.604 [2024-11-26 17:49:45.193377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.891 ms 00:52:44.604 [2024-11-26 17:49:45.193387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:44.604 [2024-11-26 17:49:45.243979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:44.604 [2024-11-26 17:49:45.244335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:52:44.604 [2024-11-26 17:49:45.244370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.609 ms 00:52:44.604 [2024-11-26 17:49:45.244390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:44.604 [2024-11-26 17:49:45.244543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:44.604 [2024-11-26 17:49:45.244557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:52:44.604 [2024-11-26 17:49:45.244570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:52:44.604 [2024-11-26 17:49:45.244581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:44.863 [2024-11-26 17:49:45.310132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:44.863 [2024-11-26 17:49:45.310214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:52:44.863 [2024-11-26 17:49:45.310233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.514 ms 00:52:44.863 [2024-11-26 17:49:45.310245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:44.863 [2024-11-26 17:49:45.310338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:44.863 [2024-11-26 17:49:45.310356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:52:44.863 [2024-11-26 17:49:45.310368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:52:44.863 [2024-11-26 17:49:45.310379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:44.863 [2024-11-26 17:49:45.311287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:44.863 [2024-11-26 17:49:45.311313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 
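The `blk_offs`/`blk_sz` fields in the superblock metadata dump above are hex counts of FTL blocks, while the region dump before it reports MiB. The block size itself is not printed here; assuming 4 KiB — an assumption the cross-checks below bear out against the MiB figures — the two dumps agree:

```python
FTL_BLOCK_SIZE = 4096  # bytes; assumed, not printed in the dump

def region_mib(blk_sz_hex):
    """Convert a blk_sz field from the SB metadata layout dump to MiB."""
    return int(blk_sz_hex, 16) * FTL_BLOCK_SIZE / (1024 * 1024)

# Cross-checks against the layout dumps above:
assert region_mib("0x5000") == 80.0   # type 0x2 -> "Region l2p ... blocks: 80.00 MiB"
assert region_mib("0x800") == 8.0     # types 0xa..0xd -> p2l0..p2l3, 8.00 MiB each
assert region_mib("0x80") == 0.5      # types 0x3/0x4 -> band_md (+ mirror), 0.50 MiB
assert region_mib("0x20") == 0.125    # type 0x0 -> sb, shown as 0.12 MiB in the dump
# The L2P size is also derivable from the setup notices:
# 20971520 entries * 4-byte address = 80 MiB, the same 0x5000 blocks.
```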
00:52:44.863 [2024-11-26 17:49:45.311326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:52:44.863 [2024-11-26 17:49:45.311337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:44.863 [2024-11-26 17:49:45.311504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:44.863 [2024-11-26 17:49:45.311520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:52:44.863 [2024-11-26 17:49:45.311539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:52:44.863 [2024-11-26 17:49:45.311550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:44.863 [2024-11-26 17:49:45.335599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:44.863 [2024-11-26 17:49:45.335681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:52:44.863 [2024-11-26 17:49:45.335701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.057 ms 00:52:44.863 [2024-11-26 17:49:45.335712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:44.863 [2024-11-26 17:49:45.358382] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:52:44.863 [2024-11-26 17:49:45.358482] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:52:44.863 [2024-11-26 17:49:45.358525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:44.863 [2024-11-26 17:49:45.358546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:52:44.863 [2024-11-26 17:49:45.358566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.645 ms 00:52:44.863 [2024-11-26 17:49:45.358577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:44.863 [2024-11-26 17:49:45.392299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:44.863 [2024-11-26 17:49:45.392405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:52:44.863 [2024-11-26 17:49:45.392425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.655 ms 00:52:44.863 [2024-11-26 17:49:45.392438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:44.863 [2024-11-26 17:49:45.413992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:44.863 [2024-11-26 17:49:45.414347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:52:44.863 [2024-11-26 17:49:45.414377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.405 ms 00:52:44.863 [2024-11-26 17:49:45.414390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:44.863 [2024-11-26 17:49:45.436098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:44.863 [2024-11-26 17:49:45.436192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:52:44.863 [2024-11-26 17:49:45.436213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.636 ms 00:52:44.863 [2024-11-26 17:49:45.436225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:44.863 [2024-11-26 17:49:45.437195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:44.863 [2024-11-26 17:49:45.437232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:52:44.863 [2024-11-26 17:49:45.437253] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:52:44.863 [2024-11-26 17:49:45.437264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:44.863 [2024-11-26 17:49:45.545074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:44.863 [2024-11-26 17:49:45.545181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:52:44.863 [2024-11-26 17:49:45.545212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.944 ms 00:52:44.863 [2024-11-26 17:49:45.545224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:45.122 [2024-11-26 17:49:45.563100] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:52:45.122 [2024-11-26 17:49:45.568929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:45.122 [2024-11-26 17:49:45.569000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:52:45.122 [2024-11-26 17:49:45.569021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.629 ms 00:52:45.122 [2024-11-26 17:49:45.569033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:45.122 [2024-11-26 17:49:45.569212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:45.122 [2024-11-26 17:49:45.569228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:52:45.122 [2024-11-26 17:49:45.569246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:52:45.122 [2024-11-26 17:49:45.569258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:45.122 [2024-11-26 17:49:45.569377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:45.122 [2024-11-26 17:49:45.569392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:52:45.122 [2024-11-26 17:49:45.569404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:52:45.122 [2024-11-26 17:49:45.569415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:45.122 [2024-11-26 17:49:45.569444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:45.122 [2024-11-26 17:49:45.569456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:52:45.122 [2024-11-26 17:49:45.569467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:52:45.122 [2024-11-26 17:49:45.569478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:45.122 [2024-11-26 17:49:45.569546] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:52:45.122 [2024-11-26 17:49:45.569562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:45.122 [2024-11-26 17:49:45.569574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:52:45.122 [2024-11-26 17:49:45.569585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:52:45.122 [2024-11-26 17:49:45.569596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:45.122 [2024-11-26 17:49:45.613655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:45.122 [2024-11-26 17:49:45.613784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:52:45.122 [2024-11-26 17:49:45.613826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.094 ms 00:52:45.122 [2024-11-26 17:49:45.613843] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:52:45.122 [2024-11-26 17:49:45.614007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:52:45.122 [2024-11-26 17:49:45.614026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:52:45.122 [2024-11-26 17:49:45.614041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:52:45.122 [2024-11-26 17:49:45.614058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:52:45.122 [2024-11-26 17:49:45.616025] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 472.311 ms, result 0 00:52:46.087  [2024-11-26T17:49:47.735Z] Copying: 24/1024 [MB] (24 MBps) [2024-11-26T17:49:48.670Z] Copying: 49/1024 [MB] (24 MBps) [2024-11-26T17:49:50.049Z] Copying: 73/1024 [MB] (24 MBps) [2024-11-26T17:49:50.989Z] Copying: 97/1024 [MB] (24 MBps) [2024-11-26T17:49:51.927Z] Copying: 122/1024 [MB] (24 MBps) [2024-11-26T17:49:52.869Z] Copying: 147/1024 [MB] (24 MBps) [2024-11-26T17:49:53.808Z] Copying: 171/1024 [MB] (24 MBps) [2024-11-26T17:49:54.747Z] Copying: 196/1024 [MB] (24 MBps) [2024-11-26T17:49:55.687Z] Copying: 220/1024 [MB] (24 MBps) [2024-11-26T17:49:56.624Z] Copying: 244/1024 [MB] (24 MBps) [2024-11-26T17:49:58.002Z] Copying: 269/1024 [MB] (24 MBps) [2024-11-26T17:49:58.941Z] Copying: 295/1024 [MB] (25 MBps) [2024-11-26T17:49:59.878Z] Copying: 319/1024 [MB] (24 MBps) [2024-11-26T17:50:00.913Z] Copying: 344/1024 [MB] (24 MBps) [2024-11-26T17:50:01.849Z] Copying: 368/1024 [MB] (24 MBps) [2024-11-26T17:50:02.788Z] Copying: 393/1024 [MB] (24 MBps) [2024-11-26T17:50:03.727Z] Copying: 417/1024 [MB] (24 MBps) [2024-11-26T17:50:04.664Z] Copying: 442/1024 [MB] (24 MBps) [2024-11-26T17:50:05.601Z] Copying: 466/1024 [MB] (24 MBps) [2024-11-26T17:50:06.978Z] Copying: 491/1024 [MB] (24 MBps) [2024-11-26T17:50:07.915Z] Copying: 515/1024 [MB] (24 MBps) [2024-11-26T17:50:08.853Z] Copying: 540/1024 [MB] (24 MBps) [2024-11-26T17:50:09.789Z] Copying: 564/1024 [MB] (23 MBps) [2024-11-26T17:50:10.726Z] Copying: 589/1024 [MB] (24 MBps) [2024-11-26T17:50:11.663Z] Copying: 613/1024 [MB] (24 MBps) [2024-11-26T17:50:12.600Z] Copying: 638/1024 [MB] (24 MBps) [2024-11-26T17:50:13.980Z] Copying: 662/1024 [MB] (23 MBps) [2024-11-26T17:50:14.917Z] Copying: 686/1024 [MB] (24 MBps) [2024-11-26T17:50:15.853Z] Copying: 710/1024 [MB] (24 MBps) [2024-11-26T17:50:16.800Z] Copying: 735/1024 [MB] (24 MBps) [2024-11-26T17:50:17.734Z] Copying: 760/1024 [MB] (24 MBps) [2024-11-26T17:50:18.668Z] Copying: 785/1024 [MB] (24 MBps) [2024-11-26T17:50:19.606Z] Copying: 810/1024 [MB] (24 MBps) [2024-11-26T17:50:20.981Z] Copying: 835/1024 [MB] (25 MBps) [2024-11-26T17:50:21.914Z] Copying: 861/1024 [MB] (25 MBps) [2024-11-26T17:50:22.894Z] Copying: 885/1024 [MB] (24 MBps) [2024-11-26T17:50:23.846Z] Copying: 909/1024 [MB] (24 MBps) [2024-11-26T17:50:24.779Z] Copying: 933/1024 [MB] (24 MBps) [2024-11-26T17:50:25.711Z] Copying: 958/1024 [MB] (24 MBps) [2024-11-26T17:50:26.645Z] Copying: 982/1024 [MB] (24 MBps) [2024-11-26T17:50:27.581Z] Copying: 1008/1024 [MB] (25 MBps) [2024-11-26T17:50:28.149Z] Copying: 1023/1024 [MB] (15 MBps) [2024-11-26T17:50:28.149Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-26 17:50:27.947721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:27.455 [2024-11-26 17:50:27.947794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:53:27.455 [2024-11-26 17:50:27.947827] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:53:27.455 [2024-11-26 17:50:27.947839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.455 [2024-11-26 17:50:27.949471] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:53:27.455 [2024-11-26 17:50:27.955246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:27.455 [2024-11-26 17:50:27.955402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:53:27.455 [2024-11-26 17:50:27.955425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.734 ms 00:53:27.455 [2024-11-26 17:50:27.955436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.455 [2024-11-26 17:50:27.966740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:27.455 [2024-11-26 17:50:27.966781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:53:27.455 [2024-11-26 17:50:27.966796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.493 ms 00:53:27.455 [2024-11-26 17:50:27.966817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.455 [2024-11-26 17:50:27.991250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:27.455 [2024-11-26 17:50:27.991305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:53:27.455 [2024-11-26 17:50:27.991320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.452 ms 00:53:27.455 [2024-11-26 17:50:27.991331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.455 [2024-11-26 17:50:27.996397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:27.455 [2024-11-26 17:50:27.996430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:53:27.455 [2024-11-26 17:50:27.996443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.039 ms 00:53:27.455 [2024-11-26 17:50:27.996461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.455 [2024-11-26 17:50:28.034923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:27.455 [2024-11-26 17:50:28.034962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:53:27.455 [2024-11-26 17:50:28.034976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.450 ms 00:53:27.455 [2024-11-26 17:50:28.034987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.455 [2024-11-26 17:50:28.056451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:27.456 [2024-11-26 17:50:28.056488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:53:27.456 [2024-11-26 17:50:28.056514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.459 ms 00:53:27.456 [2024-11-26 17:50:28.056526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.716 [2024-11-26 17:50:28.172709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:27.716 [2024-11-26 17:50:28.172804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:53:27.716 [2024-11-26 17:50:28.172824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 116.324 ms 00:53:27.716 [2024-11-26 17:50:28.172838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.716 [2024-11-26 17:50:28.210987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:53:27.716 [2024-11-26 17:50:28.211031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:53:27.716 [2024-11-26 17:50:28.211047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.191 ms 00:53:27.716 [2024-11-26 17:50:28.211059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.716 [2024-11-26 17:50:28.247055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:27.716 [2024-11-26 17:50:28.247091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:53:27.716 [2024-11-26 17:50:28.247106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.014 ms 00:53:27.716 [2024-11-26 17:50:28.247116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.716 [2024-11-26 17:50:28.282580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:27.716 [2024-11-26 17:50:28.282616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:53:27.716 [2024-11-26 17:50:28.282629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.482 ms 00:53:27.716 [2024-11-26 17:50:28.282641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.716 [2024-11-26 17:50:28.317780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:27.716 [2024-11-26 17:50:28.317933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:53:27.716 [2024-11-26 17:50:28.317954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.118 ms 00:53:27.716 [2024-11-26 17:50:28.317966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.716 [2024-11-26 17:50:28.318070] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:53:27.716 [2024-11-26 17:50:28.318091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 111360 / 261120 wr_cnt: 1 state: open 00:53:27.716 [2024-11-26 17:50:28.318105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:53:27.716 [2024-11-26 17:50:28.318117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:53:27.716 [2024-11-26 17:50:28.318128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:53:27.716 [2024-11-26 17:50:28.318140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:53:27.716 [2024-11-26 17:50:28.318152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:53:27.716 [2024-11-26 17:50:28.318163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:53:27.716 [2024-11-26 17:50:28.318175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:53:27.716 [2024-11-26 17:50:28.318187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:53:27.716 [2024-11-26 17:50:28.318198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:53:27.716 [2024-11-26 17:50:28.318210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:53:27.716 [2024-11-26 17:50:28.318221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: 
free 00:53:27.716 [2024-11-26 17:50:28.318232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:53:27.716 [2024-11-26 17:50:28.318244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:53:27.716 [2024-11-26 17:50:28.318256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:53:27.716 [2024-11-26 17:50:28.318267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 
261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.318994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.319006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.319017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.319029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.319040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.319050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.319062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.319073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.319084] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.319094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.319105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.319115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.319126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.319136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.319148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.319158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.319171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.319183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.319194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.319206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.319218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.319229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:53:27.717 [2024-11-26 17:50:28.319248] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:53:27.717 [2024-11-26 17:50:28.319260] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa0c69f4-4c7a-4c1b-bad9-b00bee17d220 00:53:27.717 [2024-11-26 17:50:28.319272] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 111360 00:53:27.717 [2024-11-26 17:50:28.319282] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 112320 00:53:27.718 [2024-11-26 17:50:28.319293] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 111360 00:53:27.718 [2024-11-26 17:50:28.319304] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0086 00:53:27.718 [2024-11-26 17:50:28.319332] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:53:27.718 [2024-11-26 17:50:28.319343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:53:27.718 [2024-11-26 17:50:28.319354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:53:27.718 [2024-11-26 17:50:28.319363] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:53:27.718 [2024-11-26 17:50:28.319381] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:53:27.718 [2024-11-26 17:50:28.319391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:27.718 [2024-11-26 17:50:28.319402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:53:27.718 [2024-11-26 17:50:28.319413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 1.325 ms 00:53:27.718 [2024-11-26 17:50:28.319424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.718 [2024-11-26 17:50:28.340354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:27.718 [2024-11-26 17:50:28.340387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:53:27.718 [2024-11-26 17:50:28.340407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.927 ms 00:53:27.718 [2024-11-26 17:50:28.340419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.718 [2024-11-26 17:50:28.341064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:27.718 [2024-11-26 17:50:28.341080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:53:27.718 [2024-11-26 17:50:28.341092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:53:27.718 [2024-11-26 17:50:28.341103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.718 [2024-11-26 17:50:28.398179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:53:27.718 [2024-11-26 17:50:28.398225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:53:27.718 [2024-11-26 17:50:28.398240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:53:27.718 [2024-11-26 17:50:28.398252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.718 [2024-11-26 17:50:28.398329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:53:27.718 [2024-11-26 17:50:28.398342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:53:27.718 [2024-11-26 17:50:28.398353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:53:27.718 [2024-11-26 17:50:28.398363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.718 [2024-11-26 17:50:28.398455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:53:27.718 [2024-11-26 17:50:28.398475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:53:27.718 [2024-11-26 17:50:28.398486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:53:27.718 [2024-11-26 17:50:28.398496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.718 [2024-11-26 17:50:28.398530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:53:27.718 [2024-11-26 17:50:28.398559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:53:27.718 [2024-11-26 17:50:28.398570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:53:27.718 [2024-11-26 17:50:28.398581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.977 [2024-11-26 17:50:28.537914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:53:27.977 [2024-11-26 17:50:28.537996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:53:27.977 [2024-11-26 17:50:28.538014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:53:27.977 [2024-11-26 17:50:28.538026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.977 [2024-11-26 17:50:28.643930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:53:27.977 [2024-11-26 17:50:28.644004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:53:27.977 [2024-11-26 
17:50:28.644023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:53:27.977 [2024-11-26 17:50:28.644036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.977 [2024-11-26 17:50:28.644163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:53:27.977 [2024-11-26 17:50:28.644177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:53:27.977 [2024-11-26 17:50:28.644188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:53:27.977 [2024-11-26 17:50:28.644207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.977 [2024-11-26 17:50:28.644261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:53:27.977 [2024-11-26 17:50:28.644274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:53:27.977 [2024-11-26 17:50:28.644285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:53:27.977 [2024-11-26 17:50:28.644296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.977 [2024-11-26 17:50:28.644439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:53:27.977 [2024-11-26 17:50:28.644455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:53:27.977 [2024-11-26 17:50:28.644466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:53:27.977 [2024-11-26 17:50:28.644482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.977 [2024-11-26 17:50:28.644549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:53:27.977 [2024-11-26 17:50:28.644573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:53:27.977 [2024-11-26 17:50:28.644584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:53:27.977 [2024-11-26 17:50:28.644596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.977 [2024-11-26 17:50:28.644645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:53:27.977 [2024-11-26 17:50:28.644658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:53:27.977 [2024-11-26 17:50:28.644669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:53:27.977 [2024-11-26 17:50:28.644680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.977 [2024-11-26 17:50:28.644738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:53:27.977 [2024-11-26 17:50:28.644751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:53:27.977 [2024-11-26 17:50:28.644762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:53:27.977 [2024-11-26 17:50:28.644773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:27.977 [2024-11-26 17:50:28.644951] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 701.397 ms, result 0 00:53:29.881 00:53:29.881 00:53:29.881 17:50:30 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:53:29.881 [2024-11-26 17:50:30.384288] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
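Two figures in this stretch of the log are easy to sanity-check. The shutdown dump reports WAF 1.0086 next to its write counters, and the `spdk_dd` restore invocation above copies `--count=262144` blocks, which lines up with the 1024 MB totals in the earlier progress lines if `--skip`/`--count` use dd-style input-block units and `ftl0` exposes 4 KiB blocks (both assumptions, not confirmed by the log):

```python
# Counters copied from the ftl_dev_dump_stats output above.
total_writes = 112320  # "total writes"
user_writes  = 111360  # "user writes"; also "total valid LBAs" and Band 1's valid count

waf = total_writes / user_writes
print(f"WAF = {waf:.4f}")          # 1.0086, matching the dumped value
print(total_writes - user_writes)  # 960 blocks of non-user (metadata) writes

# spdk_dd sizing under the block-unit and 4 KiB assumptions stated above:
BLOCK = 4096
print(262144 * BLOCK / 2**20)  # 1024.0 MiB copied, as the progress lines report
print(131072 * BLOCK / 2**20)  # 512.0 MiB skipped at the start of the input bdev
```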
00:53:29.881 [2024-11-26 17:50:30.384444] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80740 ] 00:53:29.881 [2024-11-26 17:50:30.570018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:30.140 [2024-11-26 17:50:30.720308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:30.732 [2024-11-26 17:50:31.162845] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:53:30.732 [2024-11-26 17:50:31.163176] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:53:30.732 [2024-11-26 17:50:31.328783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:30.732 [2024-11-26 17:50:31.329060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:53:30.732 [2024-11-26 17:50:31.329090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:53:30.732 [2024-11-26 17:50:31.329102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:30.732 [2024-11-26 17:50:31.329175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:30.732 [2024-11-26 17:50:31.329192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:53:30.732 [2024-11-26 17:50:31.329205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:53:30.732 [2024-11-26 17:50:31.329215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:30.732 [2024-11-26 17:50:31.329241] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:53:30.732 [2024-11-26 17:50:31.330190] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:53:30.732 [2024-11-26 17:50:31.330224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:30.732 [2024-11-26 17:50:31.330235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:53:30.732 [2024-11-26 17:50:31.330247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.991 ms 00:53:30.732 [2024-11-26 17:50:31.330258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:30.732 [2024-11-26 17:50:31.332801] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:53:30.732 [2024-11-26 17:50:31.353384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:30.732 [2024-11-26 17:50:31.353428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:53:30.732 [2024-11-26 17:50:31.353445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.618 ms 00:53:30.732 [2024-11-26 17:50:31.353456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:30.732 [2024-11-26 17:50:31.353543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:30.732 [2024-11-26 17:50:31.353558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:53:30.732 [2024-11-26 17:50:31.353571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:53:30.732 [2024-11-26 17:50:31.353582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:30.732 [2024-11-26 17:50:31.366018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:53:30.732 [2024-11-26 17:50:31.366050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:53:30.732 [2024-11-26 17:50:31.366066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.379 ms 00:53:30.732 [2024-11-26 17:50:31.366082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:30.732 [2024-11-26 17:50:31.366172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:30.732 [2024-11-26 17:50:31.366187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:53:30.732 [2024-11-26 17:50:31.366199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:53:30.732 [2024-11-26 17:50:31.366209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:30.732 [2024-11-26 17:50:31.366271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:30.732 [2024-11-26 17:50:31.366284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:53:30.732 [2024-11-26 17:50:31.366295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:53:30.732 [2024-11-26 17:50:31.366306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:30.732 [2024-11-26 17:50:31.366340] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:53:30.732 [2024-11-26 17:50:31.372128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:30.732 [2024-11-26 17:50:31.372302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:53:30.733 [2024-11-26 17:50:31.372331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.806 ms 00:53:30.733 [2024-11-26 17:50:31.372343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:30.733 [2024-11-26 17:50:31.372381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:30.733 [2024-11-26 17:50:31.372393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:53:30.733 [2024-11-26 17:50:31.372404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:53:30.733 [2024-11-26 17:50:31.372414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:30.733 [2024-11-26 17:50:31.372455] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:53:30.733 [2024-11-26 17:50:31.372483] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:53:30.733 [2024-11-26 17:50:31.372539] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:53:30.733 [2024-11-26 17:50:31.372563] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:53:30.733 [2024-11-26 17:50:31.372659] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:53:30.733 [2024-11-26 17:50:31.372673] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:53:30.733 [2024-11-26 17:50:31.372688] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:53:30.733 [2024-11-26 17:50:31.372703] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:53:30.733 [2024-11-26 17:50:31.372715] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:53:30.733 [2024-11-26 17:50:31.372727] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:53:30.733 [2024-11-26 17:50:31.372739] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:53:30.733 [2024-11-26 17:50:31.372752] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:53:30.733 [2024-11-26 17:50:31.372764] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:53:30.733 [2024-11-26 17:50:31.372775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:30.733 [2024-11-26 17:50:31.372786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:53:30.733 [2024-11-26 17:50:31.372797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:53:30.733 [2024-11-26 17:50:31.372808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:30.733 [2024-11-26 17:50:31.372881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:30.733 [2024-11-26 17:50:31.372892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:53:30.733 [2024-11-26 17:50:31.372903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:53:30.733 [2024-11-26 17:50:31.372913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:30.733 [2024-11-26 17:50:31.373018] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:53:30.733 [2024-11-26 17:50:31.373035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:53:30.733 [2024-11-26 17:50:31.373047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:53:30.733 [2024-11-26 17:50:31.373058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:53:30.733 [2024-11-26 17:50:31.373069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:53:30.733 [2024-11-26 17:50:31.373079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:53:30.733 [2024-11-26 17:50:31.373089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:53:30.733 [2024-11-26 17:50:31.373099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:53:30.733 [2024-11-26 17:50:31.373109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:53:30.733 [2024-11-26 17:50:31.373119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:53:30.733 [2024-11-26 17:50:31.373131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:53:30.733 [2024-11-26 17:50:31.373140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:53:30.733 [2024-11-26 17:50:31.373150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:53:30.733 [2024-11-26 17:50:31.373172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:53:30.733 [2024-11-26 17:50:31.373182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:53:30.733 [2024-11-26 17:50:31.373191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:53:30.733 [2024-11-26 17:50:31.373201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:53:30.733 [2024-11-26 17:50:31.373211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:53:30.733 [2024-11-26 17:50:31.373221] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:53:30.733 [2024-11-26 17:50:31.373231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:53:30.733 [2024-11-26 17:50:31.373241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:53:30.733 [2024-11-26 17:50:31.373251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:53:30.733 [2024-11-26 17:50:31.373260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:53:30.733 [2024-11-26 17:50:31.373269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:53:30.733 [2024-11-26 17:50:31.373278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:53:30.733 [2024-11-26 17:50:31.373288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:53:30.733 [2024-11-26 17:50:31.373297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:53:30.733 [2024-11-26 17:50:31.373306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:53:30.733 [2024-11-26 17:50:31.373316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:53:30.733 [2024-11-26 17:50:31.373325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:53:30.733 [2024-11-26 17:50:31.373334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:53:30.733 [2024-11-26 17:50:31.373343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:53:30.733 [2024-11-26 17:50:31.373352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:53:30.733 [2024-11-26 17:50:31.373362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:53:30.733 [2024-11-26 17:50:31.373370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:53:30.733 [2024-11-26 17:50:31.373379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:53:30.733 [2024-11-26 17:50:31.373388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:53:30.733 [2024-11-26 17:50:31.373397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:53:30.733 [2024-11-26 17:50:31.373406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:53:30.733 [2024-11-26 17:50:31.373415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:53:30.733 [2024-11-26 17:50:31.373425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:53:30.733 [2024-11-26 17:50:31.373434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:53:30.733 [2024-11-26 17:50:31.373444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:53:30.733 [2024-11-26 17:50:31.373455] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:53:30.733 [2024-11-26 17:50:31.373465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:53:30.733 [2024-11-26 17:50:31.373475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:53:30.733 [2024-11-26 17:50:31.373485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:53:30.733 [2024-11-26 17:50:31.373510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:53:30.733 [2024-11-26 17:50:31.373522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:53:30.733 [2024-11-26 17:50:31.373532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:53:30.733 
[2024-11-26 17:50:31.373542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:53:30.733 [2024-11-26 17:50:31.373552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:53:30.733 [2024-11-26 17:50:31.373561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:53:30.733 [2024-11-26 17:50:31.373573] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:53:30.733 [2024-11-26 17:50:31.373585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:53:30.733 [2024-11-26 17:50:31.373602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:53:30.733 [2024-11-26 17:50:31.373613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:53:30.733 [2024-11-26 17:50:31.373623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:53:30.733 [2024-11-26 17:50:31.373634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:53:30.733 [2024-11-26 17:50:31.373646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:53:30.733 [2024-11-26 17:50:31.373656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:53:30.733 [2024-11-26 17:50:31.373667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:53:30.733 [2024-11-26 17:50:31.373678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:53:30.733 [2024-11-26 17:50:31.373689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:53:30.733 [2024-11-26 17:50:31.373700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:53:30.733 [2024-11-26 17:50:31.373722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:53:30.733 [2024-11-26 17:50:31.373732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:53:30.733 [2024-11-26 17:50:31.373742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:53:30.733 [2024-11-26 17:50:31.373753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:53:30.733 [2024-11-26 17:50:31.373763] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:53:30.733 [2024-11-26 17:50:31.373774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:53:30.733 [2024-11-26 17:50:31.373785] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:53:30.734 [2024-11-26 17:50:31.373796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:53:30.734 [2024-11-26 17:50:31.373806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:53:30.734 [2024-11-26 17:50:31.373819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:53:30.734 [2024-11-26 17:50:31.373830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:30.734 [2024-11-26 17:50:31.373842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:53:30.734 [2024-11-26 17:50:31.373852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.869 ms 00:53:30.734 [2024-11-26 17:50:31.373863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:30.994 [2024-11-26 17:50:31.426759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:30.994 [2024-11-26 17:50:31.426959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:53:30.994 [2024-11-26 17:50:31.426986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.924 ms 00:53:30.994 [2024-11-26 17:50:31.427007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:30.994 [2024-11-26 17:50:31.427116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:30.994 [2024-11-26 17:50:31.427129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:53:30.994 [2024-11-26 17:50:31.427140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:53:30.994 [2024-11-26 17:50:31.427151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:30.994 [2024-11-26 17:50:31.495672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:30.994 [2024-11-26 17:50:31.495722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:53:30.994 [2024-11-26 17:50:31.495739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.512 ms 00:53:30.994 [2024-11-26 17:50:31.495767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:30.994 [2024-11-26 17:50:31.495840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:30.994 [2024-11-26 17:50:31.495858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:53:30.994 [2024-11-26 17:50:31.495871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:53:30.994 [2024-11-26 17:50:31.495882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:30.994 [2024-11-26 17:50:31.496808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:30.994 [2024-11-26 17:50:31.496863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:53:30.994 [2024-11-26 17:50:31.496899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.843 ms 00:53:30.994 [2024-11-26 17:50:31.496932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:30.994 [2024-11-26 17:50:31.497196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:30.994 [2024-11-26 17:50:31.497305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:53:30.994 [2024-11-26 17:50:31.497389] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:53:30.994 [2024-11-26 17:50:31.497426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:30.994 [2024-11-26 17:50:31.520756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:30.994 [2024-11-26 17:50:31.520918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:53:30.994 [2024-11-26 17:50:31.521000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.317 ms 00:53:30.994 [2024-11-26 17:50:31.521040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:30.994 [2024-11-26 17:50:31.542050] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:53:30.994 [2024-11-26 17:50:31.542207] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:53:30.994 [2024-11-26 17:50:31.542361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:30.994 [2024-11-26 17:50:31.542397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:53:30.994 [2024-11-26 17:50:31.542429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.191 ms 00:53:30.994 [2024-11-26 17:50:31.542442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:30.994 [2024-11-26 17:50:31.572591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:30.994 [2024-11-26 17:50:31.572632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:53:30.994 [2024-11-26 17:50:31.572647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.156 ms 00:53:30.994 [2024-11-26 17:50:31.572674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:30.994 [2024-11-26 17:50:31.591755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:30.994 [2024-11-26 17:50:31.591795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:53:30.994 [2024-11-26 17:50:31.591809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.054 ms 00:53:30.994 [2024-11-26 17:50:31.591821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:30.994 [2024-11-26 17:50:31.610128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:30.994 [2024-11-26 17:50:31.610162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:53:30.994 [2024-11-26 17:50:31.610175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.297 ms 00:53:30.994 [2024-11-26 17:50:31.610186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:30.994 [2024-11-26 17:50:31.611078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:30.994 [2024-11-26 17:50:31.611110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:53:30.994 [2024-11-26 17:50:31.611128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.782 ms 00:53:30.994 [2024-11-26 17:50:31.611138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:31.254 [2024-11-26 17:50:31.711119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:31.254 [2024-11-26 17:50:31.711207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:53:31.254 [2024-11-26 17:50:31.711236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 100.115 ms 00:53:31.254 [2024-11-26 17:50:31.711248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:31.254 [2024-11-26 17:50:31.722774] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:53:31.254 [2024-11-26 17:50:31.727616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:31.254 [2024-11-26 17:50:31.727651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:53:31.254 [2024-11-26 17:50:31.727670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.316 ms 00:53:31.254 [2024-11-26 17:50:31.727682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:31.254 [2024-11-26 17:50:31.727814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:31.254 [2024-11-26 17:50:31.727829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:53:31.254 [2024-11-26 17:50:31.727846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:53:31.254 [2024-11-26 17:50:31.727858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:31.254 [2024-11-26 17:50:31.730203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:31.254 [2024-11-26 17:50:31.730242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:53:31.254 [2024-11-26 17:50:31.730256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.300 ms 00:53:31.254 [2024-11-26 17:50:31.730267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:31.254 [2024-11-26 17:50:31.730316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:31.254 [2024-11-26 17:50:31.730329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:53:31.254 [2024-11-26 17:50:31.730340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:53:31.254 [2024-11-26 17:50:31.730351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:31.254 [2024-11-26 17:50:31.730401] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:53:31.254 [2024-11-26 17:50:31.730425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:31.254 [2024-11-26 17:50:31.730437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:53:31.254 [2024-11-26 17:50:31.730448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:53:31.254 [2024-11-26 17:50:31.730459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:31.254 [2024-11-26 17:50:31.768882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:31.254 [2024-11-26 17:50:31.769045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:53:31.254 [2024-11-26 17:50:31.769075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.464 ms 00:53:31.254 [2024-11-26 17:50:31.769087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:53:31.254 [2024-11-26 17:50:31.769211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:53:31.254 [2024-11-26 17:50:31.769226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:53:31.254 [2024-11-26 17:50:31.769238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:53:31.254 [2024-11-26 17:50:31.769249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
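The superblock metadata dump above lists each region as a hex block offset and block size, while the dump_region output reports the same layout in MiB; assuming the 4096-byte FTL block size used throughout these tests, the two views agree. A quick spot-check for the 0x5000-block region (which matches the l2p region's 80.00 MiB), using plain shell arithmetic:

    # 0x5000 blocks * 4096 bytes/block = 80 MiB, matching 'Region l2p ... blocks: 80.00 MiB'
    echo $(( 0x5000 * 4096 / 1024 / 1024 ))    # prints 80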
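Each management step in the startup sequence is traced with a name, a duration, and a status, and the finish message that follows reports the overall 'FTL startup' total. A minimal sketch of cross-checking the two, assuming the trace output has been captured to a file (the path ftl_startup.log is hypothetical); the per-step sum comes in slightly under the total because of time spent between steps:

    # Sum the 'duration: X ms' fields emitted by trace_step and compare
    # against the finish_msg total (442.194 ms in this run).
    grep -o 'duration: [0-9.]\+ ms' ftl_startup.log |
      awk '{ sum += $2 } END { printf "sum of step durations: %.3f ms\n", sum }'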
00:53:31.254 [2024-11-26 17:50:31.770824] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 442.194 ms, result 0 00:53:32.630  [2024-11-26T17:50:34.261Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-26T17:50:35.198Z] Copying: 48/1024 [MB] (25 MBps) [2024-11-26T17:50:36.135Z] Copying: 72/1024 [MB] (24 MBps) [2024-11-26T17:50:37.071Z] Copying: 97/1024 [MB] (24 MBps) [2024-11-26T17:50:38.007Z] Copying: 122/1024 [MB] (25 MBps) [2024-11-26T17:50:39.386Z] Copying: 147/1024 [MB] (25 MBps) [2024-11-26T17:50:40.392Z] Copying: 172/1024 [MB] (25 MBps) [2024-11-26T17:50:41.329Z] Copying: 198/1024 [MB] (25 MBps) [2024-11-26T17:50:42.265Z] Copying: 223/1024 [MB] (24 MBps) [2024-11-26T17:50:43.202Z] Copying: 247/1024 [MB] (24 MBps) [2024-11-26T17:50:44.140Z] Copying: 273/1024 [MB] (25 MBps) [2024-11-26T17:50:45.077Z] Copying: 298/1024 [MB] (25 MBps) [2024-11-26T17:50:46.013Z] Copying: 324/1024 [MB] (25 MBps) [2024-11-26T17:50:47.391Z] Copying: 348/1024 [MB] (24 MBps) [2024-11-26T17:50:48.327Z] Copying: 374/1024 [MB] (25 MBps) [2024-11-26T17:50:49.267Z] Copying: 399/1024 [MB] (25 MBps) [2024-11-26T17:50:50.205Z] Copying: 424/1024 [MB] (25 MBps) [2024-11-26T17:50:51.140Z] Copying: 450/1024 [MB] (25 MBps) [2024-11-26T17:50:52.122Z] Copying: 476/1024 [MB] (25 MBps) [2024-11-26T17:50:53.058Z] Copying: 501/1024 [MB] (25 MBps) [2024-11-26T17:50:53.992Z] Copying: 526/1024 [MB] (24 MBps) [2024-11-26T17:50:55.365Z] Copying: 551/1024 [MB] (25 MBps) [2024-11-26T17:50:56.299Z] Copying: 576/1024 [MB] (25 MBps) [2024-11-26T17:50:57.233Z] Copying: 602/1024 [MB] (25 MBps) [2024-11-26T17:50:58.167Z] Copying: 628/1024 [MB] (26 MBps) [2024-11-26T17:50:59.100Z] Copying: 655/1024 [MB] (26 MBps) [2024-11-26T17:51:00.041Z] Copying: 682/1024 [MB] (27 MBps) [2024-11-26T17:51:00.978Z] Copying: 709/1024 [MB] (27 MBps) [2024-11-26T17:51:02.355Z] Copying: 736/1024 [MB] (26 MBps) [2024-11-26T17:51:03.293Z] Copying: 763/1024 [MB] (26 MBps) [2024-11-26T17:51:04.267Z] Copying: 790/1024 [MB] (26 MBps) [2024-11-26T17:51:05.201Z] Copying: 818/1024 [MB] (28 MBps) [2024-11-26T17:51:06.138Z] Copying: 846/1024 [MB] (27 MBps) [2024-11-26T17:51:07.075Z] Copying: 873/1024 [MB] (27 MBps) [2024-11-26T17:51:08.013Z] Copying: 899/1024 [MB] (25 MBps) [2024-11-26T17:51:08.951Z] Copying: 926/1024 [MB] (27 MBps) [2024-11-26T17:51:10.325Z] Copying: 953/1024 [MB] (26 MBps) [2024-11-26T17:51:11.260Z] Copying: 980/1024 [MB] (27 MBps) [2024-11-26T17:51:11.827Z] Copying: 1007/1024 [MB] (27 MBps) [2024-11-26T17:51:11.827Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-26 17:51:11.667206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:11.133 [2024-11-26 17:51:11.667316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:54:11.133 [2024-11-26 17:51:11.667363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:54:11.133 [2024-11-26 17:51:11.667394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:11.133 [2024-11-26 17:51:11.667434] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:54:11.133 [2024-11-26 17:51:11.673588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:11.133 [2024-11-26 17:51:11.673649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:54:11.133 [2024-11-26 17:51:11.673667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.131 ms 00:54:11.133 
[2024-11-26 17:51:11.673680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:11.133 [2024-11-26 17:51:11.673951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:11.133 [2024-11-26 17:51:11.673965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:54:11.133 [2024-11-26 17:51:11.673979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:54:11.133 [2024-11-26 17:51:11.673997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:11.133 [2024-11-26 17:51:11.679448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:11.133 [2024-11-26 17:51:11.679665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:54:11.133 [2024-11-26 17:51:11.679764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.434 ms 00:54:11.133 [2024-11-26 17:51:11.679805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:11.133 [2024-11-26 17:51:11.684945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:11.133 [2024-11-26 17:51:11.685089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:54:11.133 [2024-11-26 17:51:11.685216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.077 ms 00:54:11.133 [2024-11-26 17:51:11.685266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:11.133 [2024-11-26 17:51:11.727334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:11.133 [2024-11-26 17:51:11.727664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:54:11.133 [2024-11-26 17:51:11.727695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.052 ms 00:54:11.133 [2024-11-26 17:51:11.727707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:11.133 [2024-11-26 17:51:11.753658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:11.133 [2024-11-26 17:51:11.753896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:54:11.133 [2024-11-26 17:51:11.753927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.892 ms 00:54:11.133 [2024-11-26 17:51:11.753939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:11.393 [2024-11-26 17:51:11.894860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:11.393 [2024-11-26 17:51:11.894970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:54:11.393 [2024-11-26 17:51:11.894993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 141.046 ms 00:54:11.393 [2024-11-26 17:51:11.895005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:11.393 [2024-11-26 17:51:11.935159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:11.393 [2024-11-26 17:51:11.935263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:54:11.393 [2024-11-26 17:51:11.935286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.192 ms 00:54:11.393 [2024-11-26 17:51:11.935299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:11.393 [2024-11-26 17:51:11.975335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:11.393 [2024-11-26 17:51:11.975426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:54:11.393 [2024-11-26 17:51:11.975447] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.006 ms 00:54:11.393 [2024-11-26 17:51:11.975458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:11.393 [2024-11-26 17:51:12.015282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:11.393 [2024-11-26 17:51:12.015377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:54:11.393 [2024-11-26 17:51:12.015399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.793 ms 00:54:11.393 [2024-11-26 17:51:12.015410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:11.393 [2024-11-26 17:51:12.056243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:11.393 [2024-11-26 17:51:12.056329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:54:11.393 [2024-11-26 17:51:12.056349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.731 ms 00:54:11.393 [2024-11-26 17:51:12.056360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:11.393 [2024-11-26 17:51:12.056433] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:54:11.393 [2024-11-26 17:51:12.056456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:54:11.393 [2024-11-26 17:51:12.056471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:54:11.393 [2024-11-26 17:51:12.056484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:54:11.393 [2024-11-26 17:51:12.056511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:54:11.393 [2024-11-26 17:51:12.056524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:54:11.393 [2024-11-26 17:51:12.056536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:54:11.393 [2024-11-26 17:51:12.056549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:54:11.393 [2024-11-26 17:51:12.056561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:54:11.393 [2024-11-26 17:51:12.056572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:54:11.393 [2024-11-26 17:51:12.056583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:54:11.393 [2024-11-26 17:51:12.056594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:54:11.393 [2024-11-26 17:51:12.056605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:54:11.393 [2024-11-26 17:51:12.056616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:54:11.393 [2024-11-26 17:51:12.056645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:54:11.393 [2024-11-26 17:51:12.056657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056682] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056964] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.056996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 
17:51:12.057237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 
00:54:11.394 [2024-11-26 17:51:12.057533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:54:11.394 [2024-11-26 17:51:12.057648] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:54:11.394 [2024-11-26 17:51:12.057659] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: aa0c69f4-4c7a-4c1b-bad9-b00bee17d220 00:54:11.394 [2024-11-26 17:51:12.057672] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:54:11.394 [2024-11-26 17:51:12.057682] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 20672 00:54:11.394 [2024-11-26 17:51:12.057693] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 19712 00:54:11.395 [2024-11-26 17:51:12.057704] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0487 00:54:11.395 [2024-11-26 17:51:12.057725] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:54:11.395 [2024-11-26 17:51:12.057752] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:54:11.395 [2024-11-26 17:51:12.057763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:54:11.395 [2024-11-26 17:51:12.057772] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:54:11.395 [2024-11-26 17:51:12.057782] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:54:11.395 [2024-11-26 17:51:12.057793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:11.395 [2024-11-26 17:51:12.057804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:54:11.395 [2024-11-26 17:51:12.057816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.364 ms 00:54:11.395 [2024-11-26 17:51:12.057827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:11.395 [2024-11-26 17:51:12.080441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:11.395 [2024-11-26 17:51:12.080740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:54:11.395 [2024-11-26 17:51:12.080784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.585 ms 00:54:11.395 [2024-11-26 17:51:12.080796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:11.395 [2024-11-26 17:51:12.081423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:54:11.395 [2024-11-26 17:51:12.081436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:54:11.395 [2024-11-26 17:51:12.081448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:54:11.395 [2024-11-26 17:51:12.081458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:11.655 [2024-11-26 17:51:12.138482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:54:11.655 [2024-11-26 17:51:12.138827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:54:11.655 [2024-11-26 17:51:12.138858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:54:11.655 [2024-11-26 17:51:12.138870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:11.655 [2024-11-26 17:51:12.138981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:54:11.655 [2024-11-26 17:51:12.138994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:54:11.655 [2024-11-26 17:51:12.139006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:54:11.655 [2024-11-26 17:51:12.139017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:11.655 [2024-11-26 17:51:12.139133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:54:11.655 [2024-11-26 17:51:12.139147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:54:11.655 [2024-11-26 17:51:12.139165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:54:11.655 [2024-11-26 17:51:12.139177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:11.655 [2024-11-26 17:51:12.139197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:54:11.655 [2024-11-26 17:51:12.139209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:54:11.655 [2024-11-26 17:51:12.139220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:54:11.655 [2024-11-26 17:51:12.139243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:11.655 [2024-11-26 17:51:12.281466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:54:11.655 [2024-11-26 17:51:12.281811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:54:11.655 [2024-11-26 17:51:12.281839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:54:11.655 [2024-11-26 17:51:12.281851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:11.915 [2024-11-26 17:51:12.395770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:54:11.915 [2024-11-26 17:51:12.396089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:54:11.915 [2024-11-26 17:51:12.396118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:54:11.915 [2024-11-26 17:51:12.396130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:11.915 [2024-11-26 17:51:12.396297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:54:11.915 [2024-11-26 17:51:12.396311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:54:11.915 [2024-11-26 17:51:12.396324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:54:11.915 [2024-11-26 17:51:12.396340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
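The statistics dump above reports the write amplification factor as total writes divided by user writes; the logged figures reproduce it exactly (values taken from the ftl_debug output, shown here only as a worked check):

    # WAF = total writes / user writes
    awk 'BEGIN { printf "WAF = %.4f\n", 20672 / 19712 }'    # prints WAF = 1.0487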
00:54:11.915 [2024-11-26 17:51:12.396392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:54:11.915 [2024-11-26 17:51:12.396405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:54:11.915 [2024-11-26 17:51:12.396417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:54:11.915 [2024-11-26 17:51:12.396429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:11.915 [2024-11-26 17:51:12.396612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:54:11.915 [2024-11-26 17:51:12.396628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:54:11.915 [2024-11-26 17:51:12.396641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:54:11.915 [2024-11-26 17:51:12.396652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:11.915 [2024-11-26 17:51:12.396703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:54:11.915 [2024-11-26 17:51:12.396716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:54:11.915 [2024-11-26 17:51:12.396728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:54:11.915 [2024-11-26 17:51:12.396739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:11.915 [2024-11-26 17:51:12.396790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:54:11.915 [2024-11-26 17:51:12.396802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:54:11.915 [2024-11-26 17:51:12.396813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:54:11.915 [2024-11-26 17:51:12.396823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:11.915 [2024-11-26 17:51:12.396879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:54:11.915 [2024-11-26 17:51:12.396893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:54:11.915 [2024-11-26 17:51:12.396904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:54:11.915 [2024-11-26 17:51:12.396915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:11.915 [2024-11-26 17:51:12.397074] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 731.012 ms, result 0 00:54:13.294 00:54:13.294 00:54:13.295 17:51:13 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:54:15.227 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:54:15.227 17:51:15 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:54:15.227 17:51:15 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:54:15.227 17:51:15 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:54:15.227 17:51:15 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:54:15.227 17:51:15 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:54:15.227 Process with pid 79147 is not found 00:54:15.227 Remove shared memory files 00:54:15.227 17:51:15 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79147 00:54:15.227 17:51:15 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79147 ']' 00:54:15.227 17:51:15 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79147 00:54:15.227 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79147) - No such process 00:54:15.227 17:51:15 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79147 is not found' 00:54:15.227 17:51:15 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:54:15.227 17:51:15 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:54:15.227 17:51:15 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:54:15.227 17:51:15 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:54:15.227 17:51:15 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:54:15.227 17:51:15 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:54:15.227 17:51:15 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:54:15.227 ************************************ 00:54:15.227 END TEST ftl_restore 00:54:15.227 ************************************ 00:54:15.227 00:54:15.227 real 3m21.905s 00:54:15.227 user 3m7.823s 00:54:15.227 sys 0m15.683s 00:54:15.227 17:51:15 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:54:15.227 17:51:15 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:54:15.227 17:51:15 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:54:15.227 17:51:15 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:54:15.227 17:51:15 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:54:15.227 17:51:15 ftl -- common/autotest_common.sh@10 -- # set +x 00:54:15.227 ************************************ 00:54:15.227 START TEST ftl_dirty_shutdown 00:54:15.227 ************************************ 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:54:15.227 * Looking for test storage... 
00:54:15.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:54:15.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:15.227 --rc genhtml_branch_coverage=1 00:54:15.227 --rc genhtml_function_coverage=1 00:54:15.227 --rc genhtml_legend=1 00:54:15.227 --rc geninfo_all_blocks=1 00:54:15.227 --rc geninfo_unexecuted_blocks=1 00:54:15.227 00:54:15.227 ' 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:54:15.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:15.227 --rc genhtml_branch_coverage=1 00:54:15.227 --rc genhtml_function_coverage=1 00:54:15.227 --rc genhtml_legend=1 00:54:15.227 --rc geninfo_all_blocks=1 00:54:15.227 --rc geninfo_unexecuted_blocks=1 00:54:15.227 00:54:15.227 ' 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:54:15.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:15.227 --rc genhtml_branch_coverage=1 00:54:15.227 --rc genhtml_function_coverage=1 00:54:15.227 --rc genhtml_legend=1 00:54:15.227 --rc geninfo_all_blocks=1 00:54:15.227 --rc geninfo_unexecuted_blocks=1 00:54:15.227 00:54:15.227 ' 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:54:15.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:15.227 --rc genhtml_branch_coverage=1 00:54:15.227 --rc genhtml_function_coverage=1 00:54:15.227 --rc genhtml_legend=1 00:54:15.227 --rc geninfo_all_blocks=1 00:54:15.227 --rc geninfo_unexecuted_blocks=1 00:54:15.227 00:54:15.227 ' 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:54:15.227 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:54:15.228 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:54:15.228 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:54:15.228 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:54:15.228 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:54:15.228 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:54:15.228 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:54:15.228 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:54:15.228 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:54:15.228 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:54:15.228 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:54:15.228 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:54:15.228 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:54:15.228 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:54:15.228 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:54:15.228 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:54:15.228 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:54:15.228 17:51:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:54:15.228 17:51:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:54:15.228 17:51:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:54:15.228 17:51:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:54:15.488 17:51:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:54:15.488 17:51:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:54:15.488 17:51:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:54:15.488 17:51:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:54:15.488 17:51:15 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:54:15.488 17:51:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:54:15.488 17:51:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:54:15.488 17:51:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:54:15.488 17:51:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:54:15.488 17:51:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:54:15.488 17:51:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81267 00:54:15.488 17:51:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:54:15.488 17:51:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81267 00:54:15.488 17:51:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81267 ']' 00:54:15.488 17:51:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:15.488 17:51:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:15.488 17:51:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:15.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:15.488 17:51:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:15.488 17:51:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:54:15.488 [2024-11-26 17:51:16.048411] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
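The trace above is the standard target bring-up: dirty_shutdown.sh parses its options (NV cache at 0000:00:10.0, base device at 0000:00:11.0, 240 s RPC timeout), launches spdk_tgt pinned to core 0, and blocks in waitforlisten until the RPC server answers on /var/tmp/spdk.sock. A minimal sketch of that handshake, assuming a simple polling loop rather than the full waitforlisten implementation in autotest_common.sh:

  #!/usr/bin/env bash
  # Launch the SPDK target on core 0, as dirty_shutdown.sh@44 does.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  svcpid=$!

  # Poll until the RPC server answers on the default UNIX domain socket.
  # (Assumed loop shape; the real waitforlisten also verifies the PID.)
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for _ in $(seq 1 100); do
      "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done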
00:54:15.488 [2024-11-26 17:51:16.048590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81267 ] 00:54:15.748 [2024-11-26 17:51:16.237880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:15.748 [2024-11-26 17:51:16.390521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:54:17.127 17:51:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:17.127 17:51:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:54:17.127 17:51:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:54:17.127 17:51:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:54:17.127 17:51:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:54:17.127 17:51:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:54:17.127 17:51:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:54:17.127 17:51:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:54:17.386 17:51:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:54:17.386 17:51:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:54:17.386 17:51:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:54:17.386 17:51:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:54:17.386 17:51:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:54:17.386 17:51:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:54:17.386 17:51:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:54:17.386 17:51:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:54:17.646 17:51:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:54:17.646 { 00:54:17.646 "name": "nvme0n1", 00:54:17.646 "aliases": [ 00:54:17.646 "69d4ab42-8ac3-4b7d-913f-7dbdfb29f4f9" 00:54:17.646 ], 00:54:17.646 "product_name": "NVMe disk", 00:54:17.646 "block_size": 4096, 00:54:17.646 "num_blocks": 1310720, 00:54:17.646 "uuid": "69d4ab42-8ac3-4b7d-913f-7dbdfb29f4f9", 00:54:17.646 "numa_id": -1, 00:54:17.646 "assigned_rate_limits": { 00:54:17.646 "rw_ios_per_sec": 0, 00:54:17.646 "rw_mbytes_per_sec": 0, 00:54:17.646 "r_mbytes_per_sec": 0, 00:54:17.646 "w_mbytes_per_sec": 0 00:54:17.646 }, 00:54:17.646 "claimed": true, 00:54:17.646 "claim_type": "read_many_write_one", 00:54:17.646 "zoned": false, 00:54:17.646 "supported_io_types": { 00:54:17.646 "read": true, 00:54:17.646 "write": true, 00:54:17.646 "unmap": true, 00:54:17.646 "flush": true, 00:54:17.646 "reset": true, 00:54:17.646 "nvme_admin": true, 00:54:17.646 "nvme_io": true, 00:54:17.646 "nvme_io_md": false, 00:54:17.646 "write_zeroes": true, 00:54:17.646 "zcopy": false, 00:54:17.646 "get_zone_info": false, 00:54:17.646 "zone_management": false, 00:54:17.646 "zone_append": false, 00:54:17.646 "compare": true, 00:54:17.646 "compare_and_write": false, 00:54:17.646 "abort": true, 00:54:17.646 "seek_hole": false, 00:54:17.646 "seek_data": false, 00:54:17.646 
"copy": true, 00:54:17.646 "nvme_iov_md": false 00:54:17.646 }, 00:54:17.646 "driver_specific": { 00:54:17.646 "nvme": [ 00:54:17.646 { 00:54:17.646 "pci_address": "0000:00:11.0", 00:54:17.646 "trid": { 00:54:17.646 "trtype": "PCIe", 00:54:17.646 "traddr": "0000:00:11.0" 00:54:17.646 }, 00:54:17.646 "ctrlr_data": { 00:54:17.646 "cntlid": 0, 00:54:17.646 "vendor_id": "0x1b36", 00:54:17.646 "model_number": "QEMU NVMe Ctrl", 00:54:17.646 "serial_number": "12341", 00:54:17.646 "firmware_revision": "8.0.0", 00:54:17.646 "subnqn": "nqn.2019-08.org.qemu:12341", 00:54:17.646 "oacs": { 00:54:17.646 "security": 0, 00:54:17.646 "format": 1, 00:54:17.646 "firmware": 0, 00:54:17.646 "ns_manage": 1 00:54:17.646 }, 00:54:17.646 "multi_ctrlr": false, 00:54:17.646 "ana_reporting": false 00:54:17.646 }, 00:54:17.646 "vs": { 00:54:17.646 "nvme_version": "1.4" 00:54:17.646 }, 00:54:17.646 "ns_data": { 00:54:17.646 "id": 1, 00:54:17.646 "can_share": false 00:54:17.646 } 00:54:17.646 } 00:54:17.646 ], 00:54:17.646 "mp_policy": "active_passive" 00:54:17.646 } 00:54:17.646 } 00:54:17.646 ]' 00:54:17.646 17:51:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:54:17.646 17:51:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:54:17.646 17:51:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:54:17.646 17:51:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:54:17.646 17:51:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:54:17.646 17:51:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:54:17.646 17:51:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:54:17.646 17:51:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:54:17.646 17:51:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:54:17.646 17:51:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:54:17.646 17:51:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:54:17.906 17:51:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=52f4a77b-704d-480d-9ace-1e790dcc70a6 00:54:17.906 17:51:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:54:17.906 17:51:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 52f4a77b-704d-480d-9ace-1e790dcc70a6 00:54:18.166 17:51:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:54:18.425 17:51:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=249ff458-f7da-4567-8bb6-3fedfa013c41 00:54:18.425 17:51:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 249ff458-f7da-4567-8bb6-3fedfa013c41 00:54:18.684 17:51:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=29228a67-7a91-41bf-bbb1-f37800f261aa 00:54:18.684 17:51:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:54:18.684 17:51:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 29228a67-7a91-41bf-bbb1-f37800f261aa 00:54:18.684 17:51:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:54:18.684 17:51:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:54:18.684 17:51:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=29228a67-7a91-41bf-bbb1-f37800f261aa 00:54:18.684 17:51:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:54:18.684 17:51:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 29228a67-7a91-41bf-bbb1-f37800f261aa 00:54:18.684 17:51:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=29228a67-7a91-41bf-bbb1-f37800f261aa 00:54:18.684 17:51:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:54:18.684 17:51:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:54:18.684 17:51:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:54:18.684 17:51:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 29228a67-7a91-41bf-bbb1-f37800f261aa 00:54:18.943 17:51:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:54:18.943 { 00:54:18.943 "name": "29228a67-7a91-41bf-bbb1-f37800f261aa", 00:54:18.943 "aliases": [ 00:54:18.943 "lvs/nvme0n1p0" 00:54:18.943 ], 00:54:18.943 "product_name": "Logical Volume", 00:54:18.943 "block_size": 4096, 00:54:18.943 "num_blocks": 26476544, 00:54:18.943 "uuid": "29228a67-7a91-41bf-bbb1-f37800f261aa", 00:54:18.943 "assigned_rate_limits": { 00:54:18.943 "rw_ios_per_sec": 0, 00:54:18.943 "rw_mbytes_per_sec": 0, 00:54:18.943 "r_mbytes_per_sec": 0, 00:54:18.943 "w_mbytes_per_sec": 0 00:54:18.943 }, 00:54:18.943 "claimed": false, 00:54:18.943 "zoned": false, 00:54:18.943 "supported_io_types": { 00:54:18.943 "read": true, 00:54:18.943 "write": true, 00:54:18.943 "unmap": true, 00:54:18.943 "flush": false, 00:54:18.943 "reset": true, 00:54:18.943 "nvme_admin": false, 00:54:18.943 "nvme_io": false, 00:54:18.943 "nvme_io_md": false, 00:54:18.943 "write_zeroes": true, 00:54:18.943 "zcopy": false, 00:54:18.943 "get_zone_info": false, 00:54:18.943 "zone_management": false, 00:54:18.943 "zone_append": false, 00:54:18.943 "compare": false, 00:54:18.943 "compare_and_write": false, 00:54:18.943 "abort": false, 00:54:18.943 "seek_hole": true, 00:54:18.943 "seek_data": true, 00:54:18.943 "copy": false, 00:54:18.943 "nvme_iov_md": false 00:54:18.943 }, 00:54:18.943 "driver_specific": { 00:54:18.943 "lvol": { 00:54:18.943 "lvol_store_uuid": "249ff458-f7da-4567-8bb6-3fedfa013c41", 00:54:18.943 "base_bdev": "nvme0n1", 00:54:18.943 "thin_provision": true, 00:54:18.943 "num_allocated_clusters": 0, 00:54:18.943 "snapshot": false, 00:54:18.943 "clone": false, 00:54:18.943 "esnap_clone": false 00:54:18.943 } 00:54:18.943 } 00:54:18.943 } 00:54:18.943 ]' 00:54:18.943 17:51:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:54:18.943 17:51:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:54:18.943 17:51:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:54:18.943 17:51:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:54:18.943 17:51:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:54:18.943 17:51:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:54:18.943 17:51:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:54:18.943 17:51:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:54:18.943 17:51:19 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:54:19.201 17:51:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:54:19.201 17:51:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:54:19.201 17:51:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 29228a67-7a91-41bf-bbb1-f37800f261aa 00:54:19.201 17:51:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=29228a67-7a91-41bf-bbb1-f37800f261aa 00:54:19.201 17:51:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:54:19.201 17:51:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:54:19.201 17:51:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:54:19.202 17:51:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 29228a67-7a91-41bf-bbb1-f37800f261aa 00:54:19.460 17:51:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:54:19.460 { 00:54:19.460 "name": "29228a67-7a91-41bf-bbb1-f37800f261aa", 00:54:19.460 "aliases": [ 00:54:19.460 "lvs/nvme0n1p0" 00:54:19.460 ], 00:54:19.460 "product_name": "Logical Volume", 00:54:19.460 "block_size": 4096, 00:54:19.460 "num_blocks": 26476544, 00:54:19.460 "uuid": "29228a67-7a91-41bf-bbb1-f37800f261aa", 00:54:19.460 "assigned_rate_limits": { 00:54:19.460 "rw_ios_per_sec": 0, 00:54:19.460 "rw_mbytes_per_sec": 0, 00:54:19.460 "r_mbytes_per_sec": 0, 00:54:19.460 "w_mbytes_per_sec": 0 00:54:19.460 }, 00:54:19.460 "claimed": false, 00:54:19.460 "zoned": false, 00:54:19.460 "supported_io_types": { 00:54:19.460 "read": true, 00:54:19.460 "write": true, 00:54:19.460 "unmap": true, 00:54:19.460 "flush": false, 00:54:19.460 "reset": true, 00:54:19.460 "nvme_admin": false, 00:54:19.460 "nvme_io": false, 00:54:19.460 "nvme_io_md": false, 00:54:19.460 "write_zeroes": true, 00:54:19.460 "zcopy": false, 00:54:19.460 "get_zone_info": false, 00:54:19.460 "zone_management": false, 00:54:19.460 "zone_append": false, 00:54:19.460 "compare": false, 00:54:19.460 "compare_and_write": false, 00:54:19.460 "abort": false, 00:54:19.460 "seek_hole": true, 00:54:19.460 "seek_data": true, 00:54:19.460 "copy": false, 00:54:19.460 "nvme_iov_md": false 00:54:19.460 }, 00:54:19.460 "driver_specific": { 00:54:19.460 "lvol": { 00:54:19.460 "lvol_store_uuid": "249ff458-f7da-4567-8bb6-3fedfa013c41", 00:54:19.460 "base_bdev": "nvme0n1", 00:54:19.460 "thin_provision": true, 00:54:19.460 "num_allocated_clusters": 0, 00:54:19.460 "snapshot": false, 00:54:19.460 "clone": false, 00:54:19.460 "esnap_clone": false 00:54:19.460 } 00:54:19.460 } 00:54:19.460 } 00:54:19.460 ]' 00:54:19.460 17:51:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:54:19.460 17:51:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:54:19.460 17:51:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:54:19.719 17:51:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:54:19.719 17:51:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:54:19.719 17:51:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:54:19.719 17:51:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:54:19.719 17:51:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:54:19.719 17:51:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:54:19.719 17:51:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 29228a67-7a91-41bf-bbb1-f37800f261aa 00:54:19.719 17:51:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=29228a67-7a91-41bf-bbb1-f37800f261aa 00:54:19.719 17:51:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:54:19.719 17:51:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:54:19.719 17:51:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:54:19.719 17:51:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 29228a67-7a91-41bf-bbb1-f37800f261aa 00:54:19.978 17:51:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:54:19.978 { 00:54:19.978 "name": "29228a67-7a91-41bf-bbb1-f37800f261aa", 00:54:19.978 "aliases": [ 00:54:19.978 "lvs/nvme0n1p0" 00:54:19.978 ], 00:54:19.978 "product_name": "Logical Volume", 00:54:19.978 "block_size": 4096, 00:54:19.978 "num_blocks": 26476544, 00:54:19.978 "uuid": "29228a67-7a91-41bf-bbb1-f37800f261aa", 00:54:19.978 "assigned_rate_limits": { 00:54:19.978 "rw_ios_per_sec": 0, 00:54:19.978 "rw_mbytes_per_sec": 0, 00:54:19.978 "r_mbytes_per_sec": 0, 00:54:19.978 "w_mbytes_per_sec": 0 00:54:19.978 }, 00:54:19.978 "claimed": false, 00:54:19.978 "zoned": false, 00:54:19.978 "supported_io_types": { 00:54:19.978 "read": true, 00:54:19.978 "write": true, 00:54:19.978 "unmap": true, 00:54:19.978 "flush": false, 00:54:19.978 "reset": true, 00:54:19.978 "nvme_admin": false, 00:54:19.978 "nvme_io": false, 00:54:19.978 "nvme_io_md": false, 00:54:19.978 "write_zeroes": true, 00:54:19.978 "zcopy": false, 00:54:19.978 "get_zone_info": false, 00:54:19.978 "zone_management": false, 00:54:19.978 "zone_append": false, 00:54:19.978 "compare": false, 00:54:19.978 "compare_and_write": false, 00:54:19.978 "abort": false, 00:54:19.978 "seek_hole": true, 00:54:19.978 "seek_data": true, 00:54:19.978 "copy": false, 00:54:19.978 "nvme_iov_md": false 00:54:19.978 }, 00:54:19.978 "driver_specific": { 00:54:19.978 "lvol": { 00:54:19.978 "lvol_store_uuid": "249ff458-f7da-4567-8bb6-3fedfa013c41", 00:54:19.978 "base_bdev": "nvme0n1", 00:54:19.978 "thin_provision": true, 00:54:19.978 "num_allocated_clusters": 0, 00:54:19.978 "snapshot": false, 00:54:19.978 "clone": false, 00:54:19.978 "esnap_clone": false 00:54:19.978 } 00:54:19.978 } 00:54:19.978 } 00:54:19.978 ]' 00:54:19.978 17:51:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:54:20.237 17:51:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:54:20.237 17:51:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:54:20.237 17:51:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:54:20.237 17:51:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:54:20.237 17:51:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:54:20.237 17:51:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:54:20.237 17:51:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 29228a67-7a91-41bf-bbb1-f37800f261aa 
--l2p_dram_limit 10' 00:54:20.237 17:51:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:54:20.237 17:51:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:54:20.237 17:51:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:54:20.237 17:51:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 29228a67-7a91-41bf-bbb1-f37800f261aa --l2p_dram_limit 10 -c nvc0n1p0 00:54:20.497 [2024-11-26 17:51:20.931800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:20.498 [2024-11-26 17:51:20.931884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:54:20.498 [2024-11-26 17:51:20.931908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:54:20.498 [2024-11-26 17:51:20.931920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:20.498 [2024-11-26 17:51:20.932018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:20.498 [2024-11-26 17:51:20.932033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:54:20.498 [2024-11-26 17:51:20.932048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:54:20.498 [2024-11-26 17:51:20.932059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:20.498 [2024-11-26 17:51:20.932087] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:54:20.498 [2024-11-26 17:51:20.933270] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:54:20.498 [2024-11-26 17:51:20.933312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:20.498 [2024-11-26 17:51:20.933324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:54:20.498 [2024-11-26 17:51:20.933339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.229 ms 00:54:20.498 [2024-11-26 17:51:20.933350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:20.498 [2024-11-26 17:51:20.933447] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 50c977a4-dc9e-442a-be46-ef9fda80b8fe 00:54:20.498 [2024-11-26 17:51:20.935972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:20.498 [2024-11-26 17:51:20.936019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:54:20.498 [2024-11-26 17:51:20.936034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:54:20.498 [2024-11-26 17:51:20.936051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:20.498 [2024-11-26 17:51:20.950744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:20.498 [2024-11-26 17:51:20.951063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:54:20.498 [2024-11-26 17:51:20.951092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.638 ms 00:54:20.498 [2024-11-26 17:51:20.951107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:20.498 [2024-11-26 17:51:20.951264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:20.498 [2024-11-26 17:51:20.951281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:54:20.498 [2024-11-26 17:51:20.951293] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:54:20.498 [2024-11-26 17:51:20.951313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:20.498 [2024-11-26 17:51:20.951418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:20.498 [2024-11-26 17:51:20.951435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:54:20.498 [2024-11-26 17:51:20.951451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:54:20.498 [2024-11-26 17:51:20.951465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:20.498 [2024-11-26 17:51:20.951518] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:54:20.498 [2024-11-26 17:51:20.957470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:20.498 [2024-11-26 17:51:20.957525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:54:20.498 [2024-11-26 17:51:20.957544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.989 ms 00:54:20.498 [2024-11-26 17:51:20.957555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:20.498 [2024-11-26 17:51:20.957603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:20.498 [2024-11-26 17:51:20.957615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:54:20.498 [2024-11-26 17:51:20.957629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:54:20.498 [2024-11-26 17:51:20.957640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:20.498 [2024-11-26 17:51:20.957688] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:54:20.498 [2024-11-26 17:51:20.957834] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:54:20.498 [2024-11-26 17:51:20.957857] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:54:20.498 [2024-11-26 17:51:20.957872] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:54:20.498 [2024-11-26 17:51:20.957889] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:54:20.498 [2024-11-26 17:51:20.957902] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:54:20.498 [2024-11-26 17:51:20.957917] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:54:20.498 [2024-11-26 17:51:20.957931] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:54:20.498 [2024-11-26 17:51:20.957945] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:54:20.498 [2024-11-26 17:51:20.957955] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:54:20.498 [2024-11-26 17:51:20.957969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:20.498 [2024-11-26 17:51:20.957993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:54:20.498 [2024-11-26 17:51:20.958008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:54:20.498 [2024-11-26 17:51:20.958018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:20.498 [2024-11-26 17:51:20.958100] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:20.498 [2024-11-26 17:51:20.958112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:54:20.498 [2024-11-26 17:51:20.958126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:54:20.498 [2024-11-26 17:51:20.958136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:20.498 [2024-11-26 17:51:20.958244] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:54:20.498 [2024-11-26 17:51:20.958257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:54:20.498 [2024-11-26 17:51:20.958273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:54:20.498 [2024-11-26 17:51:20.958283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:54:20.498 [2024-11-26 17:51:20.958297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:54:20.498 [2024-11-26 17:51:20.958307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:54:20.498 [2024-11-26 17:51:20.958319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:54:20.498 [2024-11-26 17:51:20.958329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:54:20.498 [2024-11-26 17:51:20.958342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:54:20.498 [2024-11-26 17:51:20.958351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:54:20.498 [2024-11-26 17:51:20.958364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:54:20.498 [2024-11-26 17:51:20.958375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:54:20.498 [2024-11-26 17:51:20.958387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:54:20.498 [2024-11-26 17:51:20.958397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:54:20.498 [2024-11-26 17:51:20.958409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:54:20.498 [2024-11-26 17:51:20.958420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:54:20.498 [2024-11-26 17:51:20.958437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:54:20.498 [2024-11-26 17:51:20.958447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:54:20.498 [2024-11-26 17:51:20.958462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:54:20.498 [2024-11-26 17:51:20.958471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:54:20.498 [2024-11-26 17:51:20.958484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:54:20.498 [2024-11-26 17:51:20.958661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:54:20.498 [2024-11-26 17:51:20.958718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:54:20.498 [2024-11-26 17:51:20.958749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:54:20.498 [2024-11-26 17:51:20.958782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:54:20.498 [2024-11-26 17:51:20.958812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:54:20.498 [2024-11-26 17:51:20.958843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:54:20.499 [2024-11-26 17:51:20.958872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:54:20.499 [2024-11-26 17:51:20.958968] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:54:20.499 [2024-11-26 17:51:20.959003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:54:20.499 [2024-11-26 17:51:20.959036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:54:20.499 [2024-11-26 17:51:20.959066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:54:20.499 [2024-11-26 17:51:20.959102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:54:20.499 [2024-11-26 17:51:20.959131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:54:20.499 [2024-11-26 17:51:20.959163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:54:20.499 [2024-11-26 17:51:20.959235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:54:20.499 [2024-11-26 17:51:20.959273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:54:20.499 [2024-11-26 17:51:20.959303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:54:20.499 [2024-11-26 17:51:20.959336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:54:20.499 [2024-11-26 17:51:20.959365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:54:20.499 [2024-11-26 17:51:20.959409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:54:20.499 [2024-11-26 17:51:20.959438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:54:20.499 [2024-11-26 17:51:20.959551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:54:20.499 [2024-11-26 17:51:20.959587] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:54:20.499 [2024-11-26 17:51:20.959622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:54:20.499 [2024-11-26 17:51:20.959652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:54:20.499 [2024-11-26 17:51:20.959688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:54:20.499 [2024-11-26 17:51:20.959794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:54:20.499 [2024-11-26 17:51:20.959831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:54:20.499 [2024-11-26 17:51:20.959861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:54:20.499 [2024-11-26 17:51:20.959928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:54:20.499 [2024-11-26 17:51:20.959961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:54:20.499 [2024-11-26 17:51:20.960038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:54:20.499 [2024-11-26 17:51:20.960085] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:54:20.499 [2024-11-26 17:51:20.960166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:54:20.499 [2024-11-26 17:51:20.960180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:54:20.499 [2024-11-26 17:51:20.960195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:54:20.499 [2024-11-26 17:51:20.960206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:54:20.499 [2024-11-26 17:51:20.960219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:54:20.499 [2024-11-26 17:51:20.960230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:54:20.499 [2024-11-26 17:51:20.960244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:54:20.499 [2024-11-26 17:51:20.960254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:54:20.499 [2024-11-26 17:51:20.960268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:54:20.499 [2024-11-26 17:51:20.960278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:54:20.499 [2024-11-26 17:51:20.960295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:54:20.499 [2024-11-26 17:51:20.960305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:54:20.499 [2024-11-26 17:51:20.960318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:54:20.499 [2024-11-26 17:51:20.960328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:54:20.499 [2024-11-26 17:51:20.960344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:54:20.499 [2024-11-26 17:51:20.960355] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:54:20.499 [2024-11-26 17:51:20.960370] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:54:20.499 [2024-11-26 17:51:20.960382] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:54:20.499 [2024-11-26 17:51:20.960397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:54:20.499 [2024-11-26 17:51:20.960408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:54:20.499 [2024-11-26 17:51:20.960422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:54:20.499 [2024-11-26 17:51:20.960435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:20.499 [2024-11-26 17:51:20.960450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:54:20.499 [2024-11-26 17:51:20.960461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.255 ms 00:54:20.499 [2024-11-26 17:51:20.960475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:20.499 [2024-11-26 17:51:20.960550] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:54:20.499 [2024-11-26 17:51:20.960571] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:54:24.695 [2024-11-26 17:51:24.691126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:24.695 [2024-11-26 17:51:24.691238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:54:24.695 [2024-11-26 17:51:24.691261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3736.628 ms 00:54:24.695 [2024-11-26 17:51:24.691276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:24.695 [2024-11-26 17:51:24.742847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:24.695 [2024-11-26 17:51:24.742932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:54:24.695 [2024-11-26 17:51:24.742952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.212 ms 00:54:24.695 [2024-11-26 17:51:24.742967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:24.695 [2024-11-26 17:51:24.743188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:24.695 [2024-11-26 17:51:24.743208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:54:24.695 [2024-11-26 17:51:24.743221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:54:24.695 [2024-11-26 17:51:24.743244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:24.695 [2024-11-26 17:51:24.799862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:24.695 [2024-11-26 17:51:24.799945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:54:24.695 [2024-11-26 17:51:24.799965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.621 ms 00:54:24.695 [2024-11-26 17:51:24.799979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:24.695 [2024-11-26 17:51:24.800064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:24.695 [2024-11-26 17:51:24.800080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:54:24.695 [2024-11-26 17:51:24.800093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:54:24.695 [2024-11-26 17:51:24.800121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:24.695 [2024-11-26 17:51:24.801002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:24.695 [2024-11-26 17:51:24.801033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:54:24.695 [2024-11-26 17:51:24.801046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 00:54:24.695 [2024-11-26 17:51:24.801060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:24.695 [2024-11-26 17:51:24.801189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:24.695 [2024-11-26 17:51:24.801213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:54:24.695 [2024-11-26 17:51:24.801225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:54:24.695 [2024-11-26 17:51:24.801242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:24.695 [2024-11-26 17:51:24.827239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:24.695 [2024-11-26 17:51:24.827340] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:54:24.695 [2024-11-26 17:51:24.827360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.012 ms 00:54:24.695 [2024-11-26 17:51:24.827385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:24.695 [2024-11-26 17:51:24.856063] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:54:24.695 [2024-11-26 17:51:24.861430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:24.695 [2024-11-26 17:51:24.861491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:54:24.695 [2024-11-26 17:51:24.861528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.887 ms 00:54:24.696 [2024-11-26 17:51:24.861540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:24.696 [2024-11-26 17:51:24.962405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:24.696 [2024-11-26 17:51:24.962512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:54:24.696 [2024-11-26 17:51:24.962543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.938 ms 00:54:24.696 [2024-11-26 17:51:24.962556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:24.696 [2024-11-26 17:51:24.962817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:24.696 [2024-11-26 17:51:24.962832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:54:24.696 [2024-11-26 17:51:24.962853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:54:24.696 [2024-11-26 17:51:24.962863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:24.696 [2024-11-26 17:51:25.004162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:24.696 [2024-11-26 17:51:25.004248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:54:24.696 [2024-11-26 17:51:25.004274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.235 ms 00:54:24.696 [2024-11-26 17:51:25.004287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:24.696 [2024-11-26 17:51:25.042903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:24.696 [2024-11-26 17:51:25.042999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:54:24.696 [2024-11-26 17:51:25.043025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.575 ms 00:54:24.696 [2024-11-26 17:51:25.043037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:24.696 [2024-11-26 17:51:25.043807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:24.696 [2024-11-26 17:51:25.043829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:54:24.696 [2024-11-26 17:51:25.043850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms 00:54:24.696 [2024-11-26 17:51:25.043861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:24.696 [2024-11-26 17:51:25.150342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:24.696 [2024-11-26 17:51:25.150418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:54:24.696 [2024-11-26 17:51:25.150447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.551 ms 00:54:24.696 [2024-11-26 17:51:25.150459] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:24.696 [2024-11-26 17:51:25.189999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:24.696 [2024-11-26 17:51:25.190051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:54:24.696 [2024-11-26 17:51:25.190072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.476 ms 00:54:24.696 [2024-11-26 17:51:25.190084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:24.696 [2024-11-26 17:51:25.230441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:24.696 [2024-11-26 17:51:25.230534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:54:24.696 [2024-11-26 17:51:25.230556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.358 ms 00:54:24.696 [2024-11-26 17:51:25.230568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:24.696 [2024-11-26 17:51:25.268571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:24.696 [2024-11-26 17:51:25.268625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:54:24.696 [2024-11-26 17:51:25.268646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.006 ms 00:54:24.696 [2024-11-26 17:51:25.268658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:24.696 [2024-11-26 17:51:25.268713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:24.696 [2024-11-26 17:51:25.268726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:54:24.696 [2024-11-26 17:51:25.268745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:54:24.696 [2024-11-26 17:51:25.268756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:24.696 [2024-11-26 17:51:25.268880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:54:24.696 [2024-11-26 17:51:25.268898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:54:24.696 [2024-11-26 17:51:25.268912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:54:24.696 [2024-11-26 17:51:25.268923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:54:24.696 [2024-11-26 17:51:25.270395] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4345.090 ms, result 0 00:54:24.696 { 00:54:24.696 "name": "ftl0", 00:54:24.696 "uuid": "50c977a4-dc9e-442a-be46-ef9fda80b8fe" 00:54:24.696 } 00:54:24.696 17:51:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:54:24.696 17:51:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:54:24.955 17:51:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:54:24.955 17:51:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:54:24.955 17:51:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:54:25.214 /dev/nbd0 00:54:25.215 17:51:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:54:25.215 17:51:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:54:25.215 17:51:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:54:25.215 17:51:25 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:54:25.215 17:51:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:54:25.215 17:51:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:54:25.215 17:51:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:54:25.215 17:51:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:54:25.215 17:51:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:54:25.215 17:51:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:54:25.215 1+0 records in 00:54:25.215 1+0 records out 00:54:25.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367176 s, 11.2 MB/s 00:54:25.215 17:51:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:54:25.215 17:51:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:54:25.215 17:51:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:54:25.215 17:51:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:54:25.215 17:51:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:54:25.215 17:51:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:54:25.475 [2024-11-26 17:51:25.957833] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:54:25.475 [2024-11-26 17:51:25.958343] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81420 ] 00:54:25.475 [2024-11-26 17:51:26.148081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:25.736 [2024-11-26 17:51:26.301803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:54:27.111  [2024-11-26T17:51:28.738Z] Copying: 187/1024 [MB] (187 MBps) [2024-11-26T17:51:30.111Z] Copying: 377/1024 [MB] (190 MBps) [2024-11-26T17:51:31.044Z] Copying: 570/1024 [MB] (192 MBps) [2024-11-26T17:51:31.980Z] Copying: 758/1024 [MB] (187 MBps) [2024-11-26T17:51:32.239Z] Copying: 941/1024 [MB] (183 MBps) [2024-11-26T17:51:33.625Z] Copying: 1024/1024 [MB] (average 187 MBps) 00:54:32.931 00:54:32.931 17:51:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:54:34.837 17:51:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:54:34.837 [2024-11-26 17:51:35.527667] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
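The workload phase around this point is plain arithmetic: 262144 blocks x 4096 B = 1 GiB of random data, staged to a file, checksummed, and replayed onto the FTL device through /dev/nbd0. A condensed sketch of the sequence the trace performs (binaries, paths, and flags are taken verbatim from the log; running it standalone, with the nbd disk already started, is an assumption):

  spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile

  # Stage 1 GiB of random payload: 262144 * 4096 bytes.
  "$spdk_dd" -m 0x2 --if=/dev/urandom --of="$testfile" --bs=4096 --count=262144

  # Record the payload checksum for the post-shutdown comparison.
  md5sum "$testfile"

  # Replay the payload onto the FTL bdev exposed at /dev/nbd0, then flush.
  "$spdk_dd" -m 0x2 --if="$testfile" --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
  sync /dev/nbd0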
00:54:34.837 [2024-11-26 17:51:35.527808] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81513 ] 00:54:35.096 [2024-11-26 17:51:35.711723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:35.356 [2024-11-26 17:51:35.859319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:54:36.733 [dd progress output collapsed: Copying: 17/1024 ... 1024/1024 [MB] at 15-18 MBps, average 17 MBps; ~60 progress updates, 2024-11-26T17:51:38.362Z to T17:52:37.396Z] 00:55:36.702 00:55:36.702 17:52:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:55:36.702 17:52:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:55:36.702 17:52:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:55:36.963 [2024-11-26 17:52:37.567194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:36.963 [2024-11-26 17:52:37.567269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:55:36.963 [2024-11-26 17:52:37.567305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:55:36.963 [2024-11-26 17:52:37.567323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:36.963 [2024-11-26 17:52:37.567355] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:55:36.963 [2024-11-26 17:52:37.572081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:36.963 [2024-11-26 17:52:37.572121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:55:36.963 [2024-11-26 17:52:37.572139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.701 ms 00:55:36.963 [2024-11-26 17:52:37.572150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:36.963 [2024-11-26 17:52:37.574447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:36.963 [2024-11-26 17:52:37.574492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:55:36.963 [2024-11-26 17:52:37.574523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.251 ms 00:55:36.963 [2024-11-26 17:52:37.574533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:36.963 [2024-11-26 17:52:37.592663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:36.963 [2024-11-26 17:52:37.592706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:55:36.963 [2024-11-26 17:52:37.592725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.125 ms 00:55:36.963 [2024-11-26 17:52:37.592736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:36.963 [2024-11-26 17:52:37.597755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:36.963 [2024-11-26 17:52:37.597797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:55:36.963 [2024-11-26 17:52:37.597817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.981 ms 00:55:36.963 [2024-11-26 17:52:37.597827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
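The @78-@80 sequence above is the clean half of the test: flush outstanding writes down to the nbd device, detach the kernel export, then gracefully unload the FTL bdev so the trace_step records that follow can persist L2P, band, and superblock state and mark the device clean. A hedged, standalone sketch of the same sequence, run from the SPDK repo root with the bdev and device names this run used:

    # Clean FTL detach: flush, stop the nbd export, unload the bdev.
    sync /dev/nbd0                           # push page-cache writes down to nbd0
    scripts/rpc.py nbd_stop_disk /dev/nbd0   # tear down the kernel nbd session
    scripts/rpc.py bdev_ftl_unload -b ftl0   # graceful unload: persists metadata and
                                             # ends with the 'Set FTL clean state' step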
00:55:36.963 [2024-11-26 17:52:37.634685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:36.963 [2024-11-26 17:52:37.634852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:55:36.963 [2024-11-26 17:52:37.634897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.828 ms 00:55:36.963 [2024-11-26 17:52:37.634909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:37.224 [2024-11-26 17:52:37.657130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:37.224 [2024-11-26 17:52:37.657170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:55:37.224 [2024-11-26 17:52:37.657193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.203 ms 00:55:37.224 [2024-11-26 17:52:37.657205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:37.224 [2024-11-26 17:52:37.657372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:37.224 [2024-11-26 17:52:37.657387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:55:37.224 [2024-11-26 17:52:37.657402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:55:37.224 [2024-11-26 17:52:37.657413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:37.224 [2024-11-26 17:52:37.693995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:37.224 [2024-11-26 17:52:37.694029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:55:37.224 [2024-11-26 17:52:37.694046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.617 ms 00:55:37.224 [2024-11-26 17:52:37.694072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:37.224 [2024-11-26 17:52:37.729571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:37.224 [2024-11-26 17:52:37.729607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:55:37.224 [2024-11-26 17:52:37.729623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.509 ms 00:55:37.224 [2024-11-26 17:52:37.729633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:37.224 [2024-11-26 17:52:37.764436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:37.224 [2024-11-26 17:52:37.764474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:55:37.224 [2024-11-26 17:52:37.764491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.808 ms 00:55:37.224 [2024-11-26 17:52:37.764520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:37.224 [2024-11-26 17:52:37.799882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:37.224 [2024-11-26 17:52:37.799917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:55:37.224 [2024-11-26 17:52:37.799934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.302 ms 00:55:37.224 [2024-11-26 17:52:37.799943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:37.224 [2024-11-26 17:52:37.799999] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:55:37.224 [2024-11-26 17:52:37.800018 .. 17:52:37.801331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 through Band 100: 0 / 261120 wr_cnt: 0 state: free [100 identical per-band records collapsed: every band reports zero valid blocks, zero writes, state free] 00:55:37.225 [2024-11-26 17:52:37.801350] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:55:37.225 [2024-11-26 17:52:37.801362] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 50c977a4-dc9e-442a-be46-ef9fda80b8fe 00:55:37.225 [2024-11-26 17:52:37.801374] ftl_debug.c:
213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:55:37.225 [2024-11-26 17:52:37.801391] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:55:37.225 [2024-11-26 17:52:37.801405] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:55:37.225 [2024-11-26 17:52:37.801419] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:55:37.225 [2024-11-26 17:52:37.801429] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:55:37.225 [2024-11-26 17:52:37.801464] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:55:37.225 [2024-11-26 17:52:37.801474] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:55:37.225 [2024-11-26 17:52:37.801486] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:55:37.225 [2024-11-26 17:52:37.801496] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:55:37.225 [2024-11-26 17:52:37.801518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:37.225 [2024-11-26 17:52:37.801529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:55:37.225 [2024-11-26 17:52:37.801543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.525 ms 00:55:37.225 [2024-11-26 17:52:37.801554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:37.225 [2024-11-26 17:52:37.821648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:37.225 [2024-11-26 17:52:37.821681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:55:37.225 [2024-11-26 17:52:37.821697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.070 ms 00:55:37.225 [2024-11-26 17:52:37.821723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:37.225 [2024-11-26 17:52:37.822337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:37.225 [2024-11-26 17:52:37.822358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:55:37.225 [2024-11-26 17:52:37.822374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:55:37.225 [2024-11-26 17:52:37.822384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:37.225 [2024-11-26 17:52:37.892846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:55:37.225 [2024-11-26 17:52:37.892888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:55:37.225 [2024-11-26 17:52:37.892906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:55:37.225 [2024-11-26 17:52:37.892918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:37.225 [2024-11-26 17:52:37.892996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:55:37.225 [2024-11-26 17:52:37.893008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:55:37.226 [2024-11-26 17:52:37.893023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:55:37.226 [2024-11-26 17:52:37.893034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:37.226 [2024-11-26 17:52:37.893156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:55:37.226 [2024-11-26 17:52:37.893174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:55:37.226 [2024-11-26 17:52:37.893189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:55:37.226 [2024-11-26 17:52:37.893200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:37.226 [2024-11-26 17:52:37.893230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:55:37.226 [2024-11-26 17:52:37.893242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:55:37.226 [2024-11-26 17:52:37.893256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:55:37.226 [2024-11-26 17:52:37.893267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:37.485 [2024-11-26 17:52:38.030256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:55:37.485 [2024-11-26 17:52:38.030334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:55:37.485 [2024-11-26 17:52:38.030356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:55:37.485 [2024-11-26 17:52:38.030367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:37.485 [2024-11-26 17:52:38.136983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:55:37.485 [2024-11-26 17:52:38.137059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:55:37.485 [2024-11-26 17:52:38.137080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:55:37.485 [2024-11-26 17:52:38.137092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:37.485 [2024-11-26 17:52:38.137265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:55:37.485 [2024-11-26 17:52:38.137280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:55:37.485 [2024-11-26 17:52:38.137300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:55:37.485 [2024-11-26 17:52:38.137311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:37.485 [2024-11-26 17:52:38.137383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:55:37.485 [2024-11-26 17:52:38.137397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:55:37.485 [2024-11-26 17:52:38.137412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:55:37.485 [2024-11-26 17:52:38.137423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:37.485 [2024-11-26 17:52:38.137569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:55:37.486 [2024-11-26 17:52:38.137584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:55:37.486 [2024-11-26 17:52:38.137599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:55:37.486 [2024-11-26 17:52:38.137613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:37.486 [2024-11-26 17:52:38.137664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:55:37.486 [2024-11-26 17:52:38.137679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:55:37.486 [2024-11-26 17:52:38.137693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:55:37.486 [2024-11-26 17:52:38.137704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:37.486 [2024-11-26 17:52:38.137760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:55:37.486 [2024-11-26 17:52:38.137771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:55:37.486 [2024-11-26 
17:52:38.137785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:55:37.486 [2024-11-26 17:52:38.137800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:37.486 [2024-11-26 17:52:38.137863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:55:37.486 [2024-11-26 17:52:38.137875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:55:37.486 [2024-11-26 17:52:38.137889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:55:37.486 [2024-11-26 17:52:38.137900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:37.486 [2024-11-26 17:52:38.138076] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 571.758 ms, result 0 00:55:37.486 true 00:55:37.486 17:52:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81267 00:55:37.746 17:52:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81267 00:55:37.746 17:52:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:55:37.746 [2024-11-26 17:52:38.284104] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:55:38.005 [2024-11-26 17:52:38.284380] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82142 ] 00:55:38.005 [2024-11-26 17:52:38.469244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:38.005 [2024-11-26 17:52:38.608479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:55:39.384 [dd progress output collapsed: Copying: 197/1024 ... 1024/1024 [MB] at 187-197 MBps, average 193 MBps; 2024-11-26T17:52:41.015Z to T17:52:45.929Z] 00:55:45.235 00:55:45.235 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81267 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:55:45.235 17:52:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:55:45.235 [2024-11-26 17:52:45.687125] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization...
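This is the dirty shutdown itself: there is no second bdev_ftl_unload, just SIGKILL to the target (the shell's "Killed" notice for pid 81267 above), so nothing written after the last persisted checkpoint reaches stable metadata. The spdk_dd at @88 then has to bring ftl0 up from that dirty state using the saved JSON config. A sketch of the pattern; $spdk_tgt_pid and the relative paths are placeholders standing in for the literal values this run used:

    # Dirty shutdown: kill the target hard, then write through a fresh process
    # that rebuilds the bdevs from the saved config and so triggers FTL recovery.
    kill -9 "$spdk_tgt_pid"                              # no graceful FTL unload runs
    rm -f "/dev/shm/spdk_tgt_trace.pid$spdk_tgt_pid"     # drop the stale trace shm file
    build/bin/spdk_dd --if=test/ftl/testfile2 --ob=ftl0 \
        --count=262144 --seek=262144 \
        --json=test/ftl/config/ftl.json                  # note 'SHM: clean 0' in the
                                                         # startup trace that follows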
00:55:45.235 [2024-11-26 17:52:45.687259] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82218 ] 00:55:45.235 [2024-11-26 17:52:45.872193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:45.494 [2024-11-26 17:52:46.016728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:55:46.062 [2024-11-26 17:52:46.462789] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:55:46.062 [2024-11-26 17:52:46.462877] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:55:46.062 [2024-11-26 17:52:46.530242] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:55:46.062 [2024-11-26 17:52:46.530835] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:55:46.062 [2024-11-26 17:52:46.531098] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:55:46.323 [2024-11-26 17:52:46.831744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.323 [2024-11-26 17:52:46.831994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:55:46.323 [2024-11-26 17:52:46.832022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:55:46.323 [2024-11-26 17:52:46.832040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.323 [2024-11-26 17:52:46.832115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.323 [2024-11-26 17:52:46.832128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:55:46.323 [2024-11-26 17:52:46.832139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:55:46.323 [2024-11-26 17:52:46.832149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.323 [2024-11-26 17:52:46.832173] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:55:46.323 [2024-11-26 17:52:46.833109] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:55:46.323 [2024-11-26 17:52:46.833131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.323 [2024-11-26 17:52:46.833143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:55:46.323 [2024-11-26 17:52:46.833154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:55:46.323 [2024-11-26 17:52:46.833165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.323 [2024-11-26 17:52:46.835638] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:55:46.323 [2024-11-26 17:52:46.856772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.323 [2024-11-26 17:52:46.856812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:55:46.323 [2024-11-26 17:52:46.856829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.169 ms 00:55:46.323 [2024-11-26 17:52:46.856842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.323 [2024-11-26 17:52:46.856922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.323 [2024-11-26 17:52:46.856937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:55:46.324 [2024-11-26 17:52:46.856949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:55:46.324 [2024-11-26 17:52:46.856960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.324 [2024-11-26 17:52:46.870144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.324 [2024-11-26 17:52:46.870179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:55:46.324 [2024-11-26 17:52:46.870193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.125 ms 00:55:46.324 [2024-11-26 17:52:46.870204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.324 [2024-11-26 17:52:46.870308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.324 [2024-11-26 17:52:46.870323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:55:46.324 [2024-11-26 17:52:46.870336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:55:46.324 [2024-11-26 17:52:46.870348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.324 [2024-11-26 17:52:46.870420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.324 [2024-11-26 17:52:46.870434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:55:46.324 [2024-11-26 17:52:46.870446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:55:46.324 [2024-11-26 17:52:46.870456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.324 [2024-11-26 17:52:46.870486] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:55:46.324 [2024-11-26 17:52:46.876374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.324 [2024-11-26 17:52:46.876409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:55:46.324 [2024-11-26 17:52:46.876422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.906 ms 00:55:46.324 [2024-11-26 17:52:46.876434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.324 [2024-11-26 17:52:46.876468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.324 [2024-11-26 17:52:46.876480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:55:46.324 [2024-11-26 17:52:46.876491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:55:46.324 [2024-11-26 17:52:46.876515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.324 [2024-11-26 17:52:46.876562] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:55:46.324 [2024-11-26 17:52:46.876591] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:55:46.324 [2024-11-26 17:52:46.876633] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:55:46.324 [2024-11-26 17:52:46.876653] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:55:46.324 [2024-11-26 17:52:46.876751] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:55:46.324 [2024-11-26 17:52:46.876765] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:55:46.324 
[2024-11-26 17:52:46.876780] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:55:46.324 [2024-11-26 17:52:46.876797] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:55:46.324 [2024-11-26 17:52:46.876810] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:55:46.324 [2024-11-26 17:52:46.876822] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:55:46.324 [2024-11-26 17:52:46.876833] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:55:46.324 [2024-11-26 17:52:46.876844] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:55:46.324 [2024-11-26 17:52:46.876855] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:55:46.324 [2024-11-26 17:52:46.876865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.324 [2024-11-26 17:52:46.876876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:55:46.324 [2024-11-26 17:52:46.876886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:55:46.324 [2024-11-26 17:52:46.876897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.324 [2024-11-26 17:52:46.876974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.324 [2024-11-26 17:52:46.876990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:55:46.324 [2024-11-26 17:52:46.877001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:55:46.324 [2024-11-26 17:52:46.877012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.324 [2024-11-26 17:52:46.877112] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:55:46.324 [2024-11-26 17:52:46.877128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:55:46.324 [2024-11-26 17:52:46.877140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:55:46.324 [2024-11-26 17:52:46.877150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:55:46.324 [2024-11-26 17:52:46.877161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:55:46.324 [2024-11-26 17:52:46.877170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:55:46.324 [2024-11-26 17:52:46.877181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:55:46.324 [2024-11-26 17:52:46.877191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:55:46.324 [2024-11-26 17:52:46.877200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:55:46.324 [2024-11-26 17:52:46.877221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:55:46.324 [2024-11-26 17:52:46.877232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:55:46.324 [2024-11-26 17:52:46.877242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:55:46.324 [2024-11-26 17:52:46.877252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:55:46.324 [2024-11-26 17:52:46.877262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:55:46.324 [2024-11-26 17:52:46.877272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:55:46.324 [2024-11-26 17:52:46.877282] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:55:46.324 [2024-11-26 17:52:46.877292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:55:46.324 [2024-11-26 17:52:46.877301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:55:46.324 [2024-11-26 17:52:46.877311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:55:46.324 [2024-11-26 17:52:46.877320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:55:46.324 [2024-11-26 17:52:46.877330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:55:46.324 [2024-11-26 17:52:46.877339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:55:46.324 [2024-11-26 17:52:46.877349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:55:46.324 [2024-11-26 17:52:46.877358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:55:46.324 [2024-11-26 17:52:46.877366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:55:46.324 [2024-11-26 17:52:46.877375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:55:46.324 [2024-11-26 17:52:46.877384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:55:46.324 [2024-11-26 17:52:46.877401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:55:46.324 [2024-11-26 17:52:46.877410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:55:46.324 [2024-11-26 17:52:46.877420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:55:46.324 [2024-11-26 17:52:46.877429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:55:46.324 [2024-11-26 17:52:46.877438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:55:46.324 [2024-11-26 17:52:46.877448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:55:46.324 [2024-11-26 17:52:46.877463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:55:46.324 [2024-11-26 17:52:46.877472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:55:46.324 [2024-11-26 17:52:46.877481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:55:46.324 [2024-11-26 17:52:46.877491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:55:46.324 [2024-11-26 17:52:46.877517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:55:46.324 [2024-11-26 17:52:46.877527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:55:46.324 [2024-11-26 17:52:46.877536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:55:46.324 [2024-11-26 17:52:46.877546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:55:46.324 [2024-11-26 17:52:46.877561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:55:46.324 [2024-11-26 17:52:46.877572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:55:46.324 [2024-11-26 17:52:46.877581] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:55:46.324 [2024-11-26 17:52:46.877595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:55:46.324 [2024-11-26 17:52:46.877613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:55:46.324 [2024-11-26 17:52:46.877626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:55:46.324 [2024-11-26 
17:52:46.877637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:55:46.324 [2024-11-26 17:52:46.877650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:55:46.324 [2024-11-26 17:52:46.877659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:55:46.324 [2024-11-26 17:52:46.877674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:55:46.324 [2024-11-26 17:52:46.877683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:55:46.324 [2024-11-26 17:52:46.877693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:55:46.324 [2024-11-26 17:52:46.877708] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:55:46.324 [2024-11-26 17:52:46.877722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:55:46.324 [2024-11-26 17:52:46.877734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:55:46.324 [2024-11-26 17:52:46.877747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:55:46.324 [2024-11-26 17:52:46.877758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:55:46.325 [2024-11-26 17:52:46.877769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:55:46.325 [2024-11-26 17:52:46.877780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:55:46.325 [2024-11-26 17:52:46.877791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:55:46.325 [2024-11-26 17:52:46.877802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:55:46.325 [2024-11-26 17:52:46.877813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:55:46.325 [2024-11-26 17:52:46.877824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:55:46.325 [2024-11-26 17:52:46.877834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:55:46.325 [2024-11-26 17:52:46.877844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:55:46.325 [2024-11-26 17:52:46.877854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:55:46.325 [2024-11-26 17:52:46.877864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:55:46.325 [2024-11-26 17:52:46.877875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:55:46.325 [2024-11-26 17:52:46.877885] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:55:46.325 [2024-11-26 17:52:46.877897] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:55:46.325 [2024-11-26 17:52:46.877908] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:55:46.325 [2024-11-26 17:52:46.877919] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:55:46.325 [2024-11-26 17:52:46.877929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:55:46.325 [2024-11-26 17:52:46.877942] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:55:46.325 [2024-11-26 17:52:46.877953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.325 [2024-11-26 17:52:46.877964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:55:46.325 [2024-11-26 17:52:46.877974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.896 ms 00:55:46.325 [2024-11-26 17:52:46.877985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.325 [2024-11-26 17:52:46.930401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.325 [2024-11-26 17:52:46.930465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:55:46.325 [2024-11-26 17:52:46.930483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.436 ms 00:55:46.325 [2024-11-26 17:52:46.930507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.325 [2024-11-26 17:52:46.930635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.325 [2024-11-26 17:52:46.930647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:55:46.325 [2024-11-26 17:52:46.930660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:55:46.325 [2024-11-26 17:52:46.930671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.325 [2024-11-26 17:52:47.004712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.325 [2024-11-26 17:52:47.004776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:55:46.325 [2024-11-26 17:52:47.004800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.042 ms 00:55:46.325 [2024-11-26 17:52:47.004812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.325 [2024-11-26 17:52:47.004897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.325 [2024-11-26 17:52:47.004909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:55:46.325 [2024-11-26 17:52:47.004921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:55:46.325 [2024-11-26 17:52:47.004932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.325 [2024-11-26 17:52:47.005796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.325 [2024-11-26 17:52:47.005814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:55:46.325 [2024-11-26 17:52:47.005826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.775 ms 00:55:46.325 [2024-11-26 17:52:47.005842] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.325 [2024-11-26 17:52:47.005992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.325 [2024-11-26 17:52:47.006007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:55:46.325 [2024-11-26 17:52:47.006019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:55:46.325 [2024-11-26 17:52:47.006030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.584 [2024-11-26 17:52:47.029828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.584 [2024-11-26 17:52:47.029892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:55:46.584 [2024-11-26 17:52:47.029911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.810 ms 00:55:46.584 [2024-11-26 17:52:47.029924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.584 [2024-11-26 17:52:47.050100] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:55:46.584 [2024-11-26 17:52:47.050305] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:55:46.584 [2024-11-26 17:52:47.050333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.584 [2024-11-26 17:52:47.050345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:55:46.584 [2024-11-26 17:52:47.050361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.252 ms 00:55:46.584 [2024-11-26 17:52:47.050372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.584 [2024-11-26 17:52:47.082318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.584 [2024-11-26 17:52:47.082392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:55:46.584 [2024-11-26 17:52:47.082412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.910 ms 00:55:46.584 [2024-11-26 17:52:47.082424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.584 [2024-11-26 17:52:47.103095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.584 [2024-11-26 17:52:47.103173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:55:46.584 [2024-11-26 17:52:47.103191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.609 ms 00:55:46.584 [2024-11-26 17:52:47.103203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.584 [2024-11-26 17:52:47.122896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.584 [2024-11-26 17:52:47.122952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:55:46.584 [2024-11-26 17:52:47.122970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.664 ms 00:55:46.584 [2024-11-26 17:52:47.122981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.584 [2024-11-26 17:52:47.123929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.584 [2024-11-26 17:52:47.123960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:55:46.584 [2024-11-26 17:52:47.123974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 00:55:46.584 [2024-11-26 17:52:47.123987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
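The startup trace above also dumps the FTL layout, and the numbers cross-check: 20971520 L2P entries at the reported address size of 4 bytes is exactly the 80.00 MiB l2p region, and each of the 100 bands at 261120 blocks of 4 KiB holds 1020 MiB, which accounts for nearly all of the 102400 MiB base data region (the remainder is presumably per-band metadata and overprovisioning). Quick shell arithmetic to confirm, assuming the 4 KiB FTL block size implied by the dd block size used throughout this test:

    # L2P region: entries * address size, in MiB.
    echo $(( 20971520 * 4 / 1024 / 1024 ))    # -> 80   (matches "Region l2p ... 80.00 MiB")
    # One band: 261120 blocks of 4 KiB, in MiB.
    echo $(( 261120 * 4096 / 1024 / 1024 ))   # -> 1020 (x100 bands ~ the 102400 MiB data region)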
00:55:46.584 [2024-11-26 17:52:47.231284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.584 [2024-11-26 17:52:47.231390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:55:46.584 [2024-11-26 17:52:47.231413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.433 ms 00:55:46.584 [2024-11-26 17:52:47.231426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.584 [2024-11-26 17:52:47.245306] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:55:46.584 [2024-11-26 17:52:47.251465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.584 [2024-11-26 17:52:47.251679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:55:46.584 [2024-11-26 17:52:47.251727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.965 ms 00:55:46.584 [2024-11-26 17:52:47.251750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.584 [2024-11-26 17:52:47.251917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.584 [2024-11-26 17:52:47.251933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:55:46.584 [2024-11-26 17:52:47.251946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:55:46.584 [2024-11-26 17:52:47.251959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.584 [2024-11-26 17:52:47.252091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.584 [2024-11-26 17:52:47.252105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:55:46.584 [2024-11-26 17:52:47.252118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:55:46.585 [2024-11-26 17:52:47.252130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.585 [2024-11-26 17:52:47.252167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.585 [2024-11-26 17:52:47.252180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:55:46.585 [2024-11-26 17:52:47.252192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:55:46.585 [2024-11-26 17:52:47.252204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.585 [2024-11-26 17:52:47.252250] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:55:46.585 [2024-11-26 17:52:47.252265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.585 [2024-11-26 17:52:47.252277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:55:46.585 [2024-11-26 17:52:47.252288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:55:46.585 [2024-11-26 17:52:47.252304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.843 [2024-11-26 17:52:47.294719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.843 [2024-11-26 17:52:47.294992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:55:46.843 [2024-11-26 17:52:47.295023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.458 ms 00:55:46.843 [2024-11-26 17:52:47.295037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.843 [2024-11-26 17:52:47.295158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:55:46.843 [2024-11-26 
17:52:47.295171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:55:46.843 [2024-11-26 17:52:47.295184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:55:46.843 [2024-11-26 17:52:47.295195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:55:46.843 [2024-11-26 17:52:47.297140] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 465.591 ms, result 0 00:55:47.777  [2024-11-26T17:52:49.430Z] Copying: 26/1024 [MB] (26 MBps) [2024-11-26T17:52:50.363Z] Copying: 52/1024 [MB] (26 MBps) [2024-11-26T17:52:51.739Z] Copying: 78/1024 [MB] (25 MBps) [2024-11-26T17:52:52.306Z] Copying: 106/1024 [MB] (27 MBps) [2024-11-26T17:52:53.686Z] Copying: 132/1024 [MB] (25 MBps) [2024-11-26T17:52:54.621Z] Copying: 158/1024 [MB] (26 MBps) [2024-11-26T17:52:55.557Z] Copying: 186/1024 [MB] (28 MBps) [2024-11-26T17:52:56.492Z] Copying: 214/1024 [MB] (27 MBps) [2024-11-26T17:52:57.429Z] Copying: 241/1024 [MB] (27 MBps) [2024-11-26T17:52:58.367Z] Copying: 268/1024 [MB] (26 MBps) [2024-11-26T17:52:59.304Z] Copying: 295/1024 [MB] (26 MBps) [2024-11-26T17:53:00.328Z] Copying: 320/1024 [MB] (25 MBps) [2024-11-26T17:53:01.707Z] Copying: 346/1024 [MB] (25 MBps) [2024-11-26T17:53:02.644Z] Copying: 372/1024 [MB] (25 MBps) [2024-11-26T17:53:03.582Z] Copying: 398/1024 [MB] (26 MBps) [2024-11-26T17:53:04.520Z] Copying: 424/1024 [MB] (25 MBps) [2024-11-26T17:53:05.458Z] Copying: 449/1024 [MB] (25 MBps) [2024-11-26T17:53:06.395Z] Copying: 475/1024 [MB] (25 MBps) [2024-11-26T17:53:07.333Z] Copying: 501/1024 [MB] (26 MBps) [2024-11-26T17:53:08.713Z] Copying: 528/1024 [MB] (26 MBps) [2024-11-26T17:53:09.281Z] Copying: 555/1024 [MB] (27 MBps) [2024-11-26T17:53:10.659Z] Copying: 582/1024 [MB] (26 MBps) [2024-11-26T17:53:11.597Z] Copying: 608/1024 [MB] (26 MBps) [2024-11-26T17:53:12.535Z] Copying: 635/1024 [MB] (27 MBps) [2024-11-26T17:53:13.472Z] Copying: 662/1024 [MB] (26 MBps) [2024-11-26T17:53:14.457Z] Copying: 688/1024 [MB] (26 MBps) [2024-11-26T17:53:15.395Z] Copying: 722/1024 [MB] (33 MBps) [2024-11-26T17:53:16.330Z] Copying: 749/1024 [MB] (27 MBps) [2024-11-26T17:53:17.265Z] Copying: 775/1024 [MB] (26 MBps) [2024-11-26T17:53:18.642Z] Copying: 802/1024 [MB] (26 MBps) [2024-11-26T17:53:19.577Z] Copying: 828/1024 [MB] (26 MBps) [2024-11-26T17:53:20.514Z] Copying: 855/1024 [MB] (26 MBps) [2024-11-26T17:53:21.449Z] Copying: 881/1024 [MB] (26 MBps) [2024-11-26T17:53:22.387Z] Copying: 908/1024 [MB] (26 MBps) [2024-11-26T17:53:23.338Z] Copying: 934/1024 [MB] (26 MBps) [2024-11-26T17:53:24.275Z] Copying: 960/1024 [MB] (26 MBps) [2024-11-26T17:53:25.654Z] Copying: 987/1024 [MB] (26 MBps) [2024-11-26T17:53:26.587Z] Copying: 1013/1024 [MB] (26 MBps) [2024-11-26T17:53:26.587Z] Copying: 1023/1024 [MB] (10 MBps) [2024-11-26T17:53:26.587Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-26 17:53:26.376685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:25.893 [2024-11-26 17:53:26.376807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:56:25.893 [2024-11-26 17:53:26.376828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:56:25.893 [2024-11-26 17:53:26.376857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:25.893 [2024-11-26 17:53:26.379391] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:56:25.893 [2024-11-26 
17:53:26.385154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:25.893 [2024-11-26 17:53:26.385302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:56:25.893 [2024-11-26 17:53:26.385418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.587 ms 00:56:25.893 [2024-11-26 17:53:26.385469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:25.893 [2024-11-26 17:53:26.396095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:25.893 [2024-11-26 17:53:26.396237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:56:25.893 [2024-11-26 17:53:26.396317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.928 ms 00:56:25.893 [2024-11-26 17:53:26.396355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:25.893 [2024-11-26 17:53:26.420281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:25.893 [2024-11-26 17:53:26.420457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:56:25.893 [2024-11-26 17:53:26.420564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.919 ms 00:56:25.893 [2024-11-26 17:53:26.420605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:25.893 [2024-11-26 17:53:26.425617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:25.893 [2024-11-26 17:53:26.425754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:56:25.893 [2024-11-26 17:53:26.425871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.940 ms 00:56:25.893 [2024-11-26 17:53:26.425909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:25.893 [2024-11-26 17:53:26.464075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:25.893 [2024-11-26 17:53:26.464242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:56:25.893 [2024-11-26 17:53:26.464324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.154 ms 00:56:25.893 [2024-11-26 17:53:26.464361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:25.893 [2024-11-26 17:53:26.486158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:25.893 [2024-11-26 17:53:26.486317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:56:25.893 [2024-11-26 17:53:26.486399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.772 ms 00:56:25.893 [2024-11-26 17:53:26.486434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:26.152 [2024-11-26 17:53:26.603314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:26.152 [2024-11-26 17:53:26.603553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:56:26.152 [2024-11-26 17:53:26.603647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 116.940 ms 00:56:26.152 [2024-11-26 17:53:26.603695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:26.152 [2024-11-26 17:53:26.642168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:26.152 [2024-11-26 17:53:26.642351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:56:26.152 [2024-11-26 17:53:26.642375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.485 ms 00:56:26.152 [2024-11-26 17:53:26.642403] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:26.152 [2024-11-26 17:53:26.679477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:26.152 [2024-11-26 17:53:26.679534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:56:26.152 [2024-11-26 17:53:26.679553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.059 ms 00:56:26.152 [2024-11-26 17:53:26.679565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:26.152 [2024-11-26 17:53:26.717259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:26.152 [2024-11-26 17:53:26.717447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:56:26.152 [2024-11-26 17:53:26.717472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.710 ms 00:56:26.152 [2024-11-26 17:53:26.717483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:26.152 [2024-11-26 17:53:26.753390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:26.152 [2024-11-26 17:53:26.753437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:56:26.152 [2024-11-26 17:53:26.753453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.856 ms 00:56:26.152 [2024-11-26 17:53:26.753464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:26.152 [2024-11-26 17:53:26.753524] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:56:26.152 [2024-11-26 17:53:26.753545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 108288 / 261120 wr_cnt: 1 state: open 00:56:26.152 [2024-11-26 17:53:26.753560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:56:26.152 [2024-11-26 17:53:26.753572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:56:26.152 [2024-11-26 17:53:26.753584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:56:26.152 [2024-11-26 17:53:26.753595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:56:26.152 [2024-11-26 17:53:26.753607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:56:26.152 [2024-11-26 17:53:26.753619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:56:26.152 [2024-11-26 17:53:26.753630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:56:26.152 [2024-11-26 17:53:26.753641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:56:26.152 [2024-11-26 17:53:26.753652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:56:26.152 [2024-11-26 17:53:26.753664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:56:26.152 [2024-11-26 17:53:26.753675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:56:26.152 [2024-11-26 17:53:26.753686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:56:26.152 [2024-11-26 17:53:26.753697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:56:26.152 [2024-11-26 
17:53:26.753709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.753719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.753730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.753742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.753753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.753765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.753776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.753787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.753797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.753808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.753820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.753832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.753843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.753854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.753865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.753876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.753888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.753900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.753910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.753921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.753931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.753942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.753952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.753963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.753974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:56:26.153 [2024-11-26 17:53:26.753985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.753996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:56:26.153 [2024-11-26 17:53:26.754511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:56:26.154 [2024-11-26 17:53:26.754523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:56:26.154 [2024-11-26 17:53:26.754534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:56:26.154 [2024-11-26 17:53:26.754545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:56:26.154 [2024-11-26 17:53:26.754556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:56:26.154 [2024-11-26 17:53:26.754567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:56:26.154 [2024-11-26 17:53:26.754578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:56:26.154 [2024-11-26 17:53:26.754590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:56:26.154 [2024-11-26 17:53:26.754601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:56:26.154 [2024-11-26 17:53:26.754612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:56:26.154 [2024-11-26 17:53:26.754623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:56:26.154 [2024-11-26 17:53:26.754634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:56:26.154 [2024-11-26 17:53:26.754645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:56:26.154 [2024-11-26 17:53:26.754666] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:56:26.154 [2024-11-26 17:53:26.754677] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 50c977a4-dc9e-442a-be46-ef9fda80b8fe 00:56:26.154 [2024-11-26 17:53:26.754709] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 108288 00:56:26.154 [2024-11-26 17:53:26.754720] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 109248 00:56:26.154 [2024-11-26 17:53:26.754730] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 108288 00:56:26.154 [2024-11-26 17:53:26.754742] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:56:26.154 [2024-11-26 17:53:26.754752] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:56:26.154 [2024-11-26 17:53:26.754764] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:56:26.154 [2024-11-26 17:53:26.754775] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:56:26.154 [2024-11-26 17:53:26.754784] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:56:26.154 [2024-11-26 17:53:26.754794] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:56:26.154 [2024-11-26 17:53:26.754804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:26.154 [2024-11-26 17:53:26.754816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:56:26.154 [2024-11-26 17:53:26.754827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.284 ms 00:56:26.154 [2024-11-26 17:53:26.754837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:26.154 [2024-11-26 17:53:26.776642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:26.154 [2024-11-26 17:53:26.776689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:56:26.154 [2024-11-26 17:53:26.776705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.802 ms 00:56:26.154 [2024-11-26 17:53:26.776716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:26.154 [2024-11-26 17:53:26.777345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:26.154 [2024-11-26 17:53:26.777361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:56:26.154 [2024-11-26 17:53:26.777381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:56:26.154 [2024-11-26 17:53:26.777391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:26.154 [2024-11-26 17:53:26.834526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:56:26.154 [2024-11-26 17:53:26.834586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:56:26.154 [2024-11-26 17:53:26.834603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:56:26.154 [2024-11-26 17:53:26.834614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:26.154 [2024-11-26 17:53:26.834707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:56:26.154 [2024-11-26 17:53:26.834719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:56:26.154 [2024-11-26 17:53:26.834736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:56:26.154 [2024-11-26 17:53:26.834747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:26.154 [2024-11-26 17:53:26.834875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:56:26.154 [2024-11-26 17:53:26.834890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:56:26.154 [2024-11-26 17:53:26.834902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:56:26.154 [2024-11-26 17:53:26.834913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:26.154 [2024-11-26 17:53:26.834934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:56:26.154 [2024-11-26 17:53:26.834945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:56:26.154 [2024-11-26 17:53:26.834956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:56:26.154 [2024-11-26 17:53:26.834967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:26.412 [2024-11-26 17:53:26.974967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:56:26.412 [2024-11-26 17:53:26.975037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:56:26.412 [2024-11-26 17:53:26.975056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:56:26.412 [2024-11-26 17:53:26.975069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:26.412 [2024-11-26 17:53:27.083869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:56:26.412 [2024-11-26 17:53:27.083953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:56:26.412 [2024-11-26 17:53:27.083971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:56:26.412 [2024-11-26 17:53:27.083990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:26.412 [2024-11-26 17:53:27.084124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:56:26.412 [2024-11-26 
17:53:27.084137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:56:26.462 [2024-11-26 17:53:27.084149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:56:26.462 [2024-11-26 17:53:27.084160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:26.462 [2024-11-26 17:53:27.084211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:56:26.462 [2024-11-26 17:53:27.084223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:56:26.462 [2024-11-26 17:53:27.084235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:56:26.462 [2024-11-26 17:53:27.084245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:26.462 [2024-11-26 17:53:27.084388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:56:26.462 [2024-11-26 17:53:27.084402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:56:26.462 [2024-11-26 17:53:27.084414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:56:26.462 [2024-11-26 17:53:27.084425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:26.462 [2024-11-26 17:53:27.084466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:56:26.462 [2024-11-26 17:53:27.084478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:56:26.462 [2024-11-26 17:53:27.084490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:56:26.462 [2024-11-26 17:53:27.084522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:26.462 [2024-11-26 17:53:27.084578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:56:26.462 [2024-11-26 17:53:27.084590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:56:26.462 [2024-11-26 17:53:27.084616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:56:26.462 [2024-11-26 17:53:27.084626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:26.462 [2024-11-26 17:53:27.084679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:56:26.462 [2024-11-26 17:53:27.084692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:56:26.462 [2024-11-26 17:53:27.084704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:56:26.462 [2024-11-26 17:53:27.084714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:26.462 [2024-11-26 17:53:27.084866] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 711.840 ms, result 0 00:56:28.362 00:56:28.362 00:56:28.362 17:53:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:56:30.267 17:53:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:56:30.267 [2024-11-26 17:53:30.560003] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:56:30.267 [2024-11-26 17:53:30.560155] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82667 ] 00:56:30.267 [2024-11-26 17:53:30.750784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:30.267 [2024-11-26 17:53:30.900126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:56:30.842 [2024-11-26 17:53:31.344422] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:56:30.842 [2024-11-26 17:53:31.344521] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:56:30.842 [2024-11-26 17:53:31.512555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:30.842 [2024-11-26 17:53:31.512642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:56:30.842 [2024-11-26 17:53:31.512662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:56:30.842 [2024-11-26 17:53:31.512674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:30.842 [2024-11-26 17:53:31.512745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:30.842 [2024-11-26 17:53:31.512762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:56:30.842 [2024-11-26 17:53:31.512774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:56:30.842 [2024-11-26 17:53:31.512784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:30.842 [2024-11-26 17:53:31.512809] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:56:30.842 [2024-11-26 17:53:31.513882] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:56:30.842 [2024-11-26 17:53:31.513907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:30.842 [2024-11-26 17:53:31.513919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:56:30.842 [2024-11-26 17:53:31.513931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.105 ms 00:56:30.842 [2024-11-26 17:53:31.513941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:30.842 [2024-11-26 17:53:31.516367] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:56:31.102 [2024-11-26 17:53:31.536652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.102 [2024-11-26 17:53:31.536705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:56:31.102 [2024-11-26 17:53:31.536723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.318 ms 00:56:31.102 [2024-11-26 17:53:31.536735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.102 [2024-11-26 17:53:31.536829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.102 [2024-11-26 17:53:31.536843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:56:31.102 [2024-11-26 17:53:31.536855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:56:31.102 [2024-11-26 17:53:31.536866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.102 [2024-11-26 17:53:31.549762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:56:31.102 [2024-11-26 17:53:31.549801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:56:31.102 [2024-11-26 17:53:31.549816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.830 ms 00:56:31.102 [2024-11-26 17:53:31.549833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.102 [2024-11-26 17:53:31.549933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.102 [2024-11-26 17:53:31.549948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:56:31.102 [2024-11-26 17:53:31.549960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:56:31.102 [2024-11-26 17:53:31.549971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.102 [2024-11-26 17:53:31.550040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.102 [2024-11-26 17:53:31.550053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:56:31.102 [2024-11-26 17:53:31.550065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:56:31.102 [2024-11-26 17:53:31.550076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.102 [2024-11-26 17:53:31.550115] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:56:31.102 [2024-11-26 17:53:31.556046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.102 [2024-11-26 17:53:31.556081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:56:31.102 [2024-11-26 17:53:31.556100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.951 ms 00:56:31.102 [2024-11-26 17:53:31.556110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.102 [2024-11-26 17:53:31.556146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.102 [2024-11-26 17:53:31.556158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:56:31.102 [2024-11-26 17:53:31.556170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:56:31.102 [2024-11-26 17:53:31.556181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.102 [2024-11-26 17:53:31.556222] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:56:31.102 [2024-11-26 17:53:31.556249] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:56:31.102 [2024-11-26 17:53:31.556290] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:56:31.102 [2024-11-26 17:53:31.556314] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:56:31.102 [2024-11-26 17:53:31.556410] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:56:31.102 [2024-11-26 17:53:31.556425] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:56:31.102 [2024-11-26 17:53:31.556440] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:56:31.102 [2024-11-26 17:53:31.556454] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:56:31.102 [2024-11-26 17:53:31.556467] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:56:31.102 [2024-11-26 17:53:31.556480] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:56:31.102 [2024-11-26 17:53:31.556491] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:56:31.102 [2024-11-26 17:53:31.556523] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:56:31.103 [2024-11-26 17:53:31.556534] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:56:31.103 [2024-11-26 17:53:31.556546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.103 [2024-11-26 17:53:31.556557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:56:31.103 [2024-11-26 17:53:31.556569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:56:31.103 [2024-11-26 17:53:31.556580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.103 [2024-11-26 17:53:31.556655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.103 [2024-11-26 17:53:31.556666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:56:31.103 [2024-11-26 17:53:31.556678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:56:31.103 [2024-11-26 17:53:31.556688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.103 [2024-11-26 17:53:31.556795] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:56:31.103 [2024-11-26 17:53:31.556811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:56:31.103 [2024-11-26 17:53:31.556823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:56:31.103 [2024-11-26 17:53:31.556834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:56:31.103 [2024-11-26 17:53:31.556846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:56:31.103 [2024-11-26 17:53:31.556856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:56:31.103 [2024-11-26 17:53:31.556866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:56:31.103 [2024-11-26 17:53:31.556875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:56:31.103 [2024-11-26 17:53:31.556885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:56:31.103 [2024-11-26 17:53:31.556895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:56:31.103 [2024-11-26 17:53:31.556912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:56:31.103 [2024-11-26 17:53:31.556921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:56:31.103 [2024-11-26 17:53:31.556931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:56:31.103 [2024-11-26 17:53:31.556953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:56:31.103 [2024-11-26 17:53:31.556963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:56:31.103 [2024-11-26 17:53:31.556973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:56:31.103 [2024-11-26 17:53:31.556983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:56:31.103 [2024-11-26 17:53:31.556992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:56:31.103 [2024-11-26 17:53:31.557002] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:56:31.103 [2024-11-26 17:53:31.557012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:56:31.103 [2024-11-26 17:53:31.557022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:56:31.103 [2024-11-26 17:53:31.557031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:56:31.103 [2024-11-26 17:53:31.557041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:56:31.103 [2024-11-26 17:53:31.557051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:56:31.103 [2024-11-26 17:53:31.557060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:56:31.103 [2024-11-26 17:53:31.557069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:56:31.103 [2024-11-26 17:53:31.557078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:56:31.103 [2024-11-26 17:53:31.557087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:56:31.103 [2024-11-26 17:53:31.557097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:56:31.103 [2024-11-26 17:53:31.557106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:56:31.103 [2024-11-26 17:53:31.557115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:56:31.103 [2024-11-26 17:53:31.557124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:56:31.103 [2024-11-26 17:53:31.557134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:56:31.103 [2024-11-26 17:53:31.557142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:56:31.103 [2024-11-26 17:53:31.557151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:56:31.103 [2024-11-26 17:53:31.557160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:56:31.103 [2024-11-26 17:53:31.557169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:56:31.103 [2024-11-26 17:53:31.557178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:56:31.103 [2024-11-26 17:53:31.557187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:56:31.103 [2024-11-26 17:53:31.557196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:56:31.103 [2024-11-26 17:53:31.557205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:56:31.103 [2024-11-26 17:53:31.557214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:56:31.103 [2024-11-26 17:53:31.557225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:56:31.103 [2024-11-26 17:53:31.557235] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:56:31.103 [2024-11-26 17:53:31.557245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:56:31.103 [2024-11-26 17:53:31.557255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:56:31.103 [2024-11-26 17:53:31.557266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:56:31.103 [2024-11-26 17:53:31.557276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:56:31.103 [2024-11-26 17:53:31.557286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:56:31.103 [2024-11-26 17:53:31.557295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:56:31.103 
[2024-11-26 17:53:31.557305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:56:31.103 [2024-11-26 17:53:31.557314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:56:31.103 [2024-11-26 17:53:31.557324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:56:31.103 [2024-11-26 17:53:31.557335] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:56:31.103 [2024-11-26 17:53:31.557348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:56:31.103 [2024-11-26 17:53:31.557364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:56:31.103 [2024-11-26 17:53:31.557376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:56:31.103 [2024-11-26 17:53:31.557387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:56:31.103 [2024-11-26 17:53:31.557397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:56:31.103 [2024-11-26 17:53:31.557407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:56:31.103 [2024-11-26 17:53:31.557418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:56:31.103 [2024-11-26 17:53:31.557428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:56:31.103 [2024-11-26 17:53:31.557439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:56:31.103 [2024-11-26 17:53:31.557449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:56:31.103 [2024-11-26 17:53:31.557460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:56:31.103 [2024-11-26 17:53:31.557470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:56:31.103 [2024-11-26 17:53:31.557480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:56:31.103 [2024-11-26 17:53:31.557491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:56:31.103 [2024-11-26 17:53:31.557512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:56:31.103 [2024-11-26 17:53:31.557523] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:56:31.103 [2024-11-26 17:53:31.557535] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:56:31.103 [2024-11-26 17:53:31.557547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:56:31.103 [2024-11-26 17:53:31.557558] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:56:31.103 [2024-11-26 17:53:31.557569] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:56:31.103 [2024-11-26 17:53:31.557581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:56:31.103 [2024-11-26 17:53:31.557592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.103 [2024-11-26 17:53:31.557604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:56:31.103 [2024-11-26 17:53:31.557614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.854 ms 00:56:31.103 [2024-11-26 17:53:31.557624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.103 [2024-11-26 17:53:31.605295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.103 [2024-11-26 17:53:31.605358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:56:31.103 [2024-11-26 17:53:31.605377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.692 ms 00:56:31.103 [2024-11-26 17:53:31.605395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.103 [2024-11-26 17:53:31.605522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.103 [2024-11-26 17:53:31.605536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:56:31.103 [2024-11-26 17:53:31.605549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:56:31.103 [2024-11-26 17:53:31.605561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.103 [2024-11-26 17:53:31.671916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.103 [2024-11-26 17:53:31.672191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:56:31.103 [2024-11-26 17:53:31.672223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.322 ms 00:56:31.103 [2024-11-26 17:53:31.672234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.103 [2024-11-26 17:53:31.672323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.104 [2024-11-26 17:53:31.672342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:56:31.104 [2024-11-26 17:53:31.672355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:56:31.104 [2024-11-26 17:53:31.672366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.104 [2024-11-26 17:53:31.673430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.104 [2024-11-26 17:53:31.673451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:56:31.104 [2024-11-26 17:53:31.673464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:56:31.104 [2024-11-26 17:53:31.673474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.104 [2024-11-26 17:53:31.673642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.104 [2024-11-26 17:53:31.673657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:56:31.104 [2024-11-26 17:53:31.673675] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:56:31.104 [2024-11-26 17:53:31.673686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.104 [2024-11-26 17:53:31.699621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.104 [2024-11-26 17:53:31.699681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:56:31.104 [2024-11-26 17:53:31.699700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.952 ms 00:56:31.104 [2024-11-26 17:53:31.699712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.104 [2024-11-26 17:53:31.721937] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:56:31.104 [2024-11-26 17:53:31.722011] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:56:31.104 [2024-11-26 17:53:31.722030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.104 [2024-11-26 17:53:31.722041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:56:31.104 [2024-11-26 17:53:31.722055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.171 ms 00:56:31.104 [2024-11-26 17:53:31.722067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.104 [2024-11-26 17:53:31.753529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.104 [2024-11-26 17:53:31.753775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:56:31.104 [2024-11-26 17:53:31.753804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.454 ms 00:56:31.104 [2024-11-26 17:53:31.753817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.104 [2024-11-26 17:53:31.774424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.104 [2024-11-26 17:53:31.774487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:56:31.104 [2024-11-26 17:53:31.774515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.569 ms 00:56:31.104 [2024-11-26 17:53:31.774527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.104 [2024-11-26 17:53:31.794470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.104 [2024-11-26 17:53:31.794536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:56:31.104 [2024-11-26 17:53:31.794554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.917 ms 00:56:31.104 [2024-11-26 17:53:31.794566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.364 [2024-11-26 17:53:31.795522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.364 [2024-11-26 17:53:31.795556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:56:31.364 [2024-11-26 17:53:31.795575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms 00:56:31.364 [2024-11-26 17:53:31.795587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.364 [2024-11-26 17:53:31.900070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.364 [2024-11-26 17:53:31.900158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:56:31.364 [2024-11-26 17:53:31.900186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 104.617 ms 00:56:31.364 [2024-11-26 17:53:31.900199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.364 [2024-11-26 17:53:31.915827] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:56:31.364 [2024-11-26 17:53:31.921073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.364 [2024-11-26 17:53:31.921117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:56:31.364 [2024-11-26 17:53:31.921136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.812 ms 00:56:31.364 [2024-11-26 17:53:31.921148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.364 [2024-11-26 17:53:31.921304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.364 [2024-11-26 17:53:31.921320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:56:31.364 [2024-11-26 17:53:31.921337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:56:31.364 [2024-11-26 17:53:31.921348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.364 [2024-11-26 17:53:31.923717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.364 [2024-11-26 17:53:31.923782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:56:31.364 [2024-11-26 17:53:31.923797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.300 ms 00:56:31.364 [2024-11-26 17:53:31.923808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.364 [2024-11-26 17:53:31.923857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.364 [2024-11-26 17:53:31.923869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:56:31.364 [2024-11-26 17:53:31.923881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:56:31.364 [2024-11-26 17:53:31.923891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.364 [2024-11-26 17:53:31.923942] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:56:31.364 [2024-11-26 17:53:31.923955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.364 [2024-11-26 17:53:31.923966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:56:31.364 [2024-11-26 17:53:31.923977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:56:31.364 [2024-11-26 17:53:31.923988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.364 [2024-11-26 17:53:31.963487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.364 [2024-11-26 17:53:31.963557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:56:31.364 [2024-11-26 17:53:31.963586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.538 ms 00:56:31.364 [2024-11-26 17:53:31.963598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:56:31.364 [2024-11-26 17:53:31.963704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:56:31.364 [2024-11-26 17:53:31.963719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:56:31.364 [2024-11-26 17:53:31.963731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:56:31.364 [2024-11-26 17:53:31.963742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:56:31.364 [2024-11-26 17:53:31.965363] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 452.948 ms, result 0 00:56:32.770 [spdk_dd progress meter, 2024-11-26T17:53:34.399Z through 2024-11-26T17:54:04.548Z: Copying 1208/1048576 [kB] up to 1024/1024 [MB], average 31 MBps] [2024-11-26 17:54:04.350842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:03.854 [2024-11-26 17:54:04.350966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:57:03.854 [2024-11-26 17:54:04.351000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:57:03.854 [2024-11-26 17:54:04.351023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:03.854 [2024-11-26 17:54:04.351076] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:57:03.854 [2024-11-26 17:54:04.358446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:03.854 [2024-11-26 17:54:04.358543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:57:03.854 [2024-11-26 17:54:04.358570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.340 ms 00:57:03.854 [2024-11-26 17:54:04.358588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:03.854 [2024-11-26 17:54:04.358948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:03.854 [2024-11-26 17:54:04.358975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:57:03.854 [2024-11-26 17:54:04.358994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
duration: 0.298 ms 00:57:03.854 [2024-11-26 17:54:04.359010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:03.854 [2024-11-26 17:54:04.372644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:03.854 [2024-11-26 17:54:04.373052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:57:03.854 [2024-11-26 17:54:04.373093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.619 ms 00:57:03.854 [2024-11-26 17:54:04.373106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:03.854 [2024-11-26 17:54:04.378508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:03.854 [2024-11-26 17:54:04.378573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:57:03.854 [2024-11-26 17:54:04.378603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.342 ms 00:57:03.854 [2024-11-26 17:54:04.378615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:03.854 [2024-11-26 17:54:04.423618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:03.854 [2024-11-26 17:54:04.423976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:57:03.854 [2024-11-26 17:54:04.424009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.001 ms 00:57:03.854 [2024-11-26 17:54:04.424021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:03.854 [2024-11-26 17:54:04.449558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:03.854 [2024-11-26 17:54:04.449900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:57:03.854 [2024-11-26 17:54:04.449935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.480 ms 00:57:03.854 [2024-11-26 17:54:04.449948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:03.854 [2024-11-26 17:54:04.452430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:03.854 [2024-11-26 17:54:04.452477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:57:03.854 [2024-11-26 17:54:04.452493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.365 ms 00:57:03.854 [2024-11-26 17:54:04.452533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:03.854 [2024-11-26 17:54:04.497406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:03.854 [2024-11-26 17:54:04.497798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:57:03.854 [2024-11-26 17:54:04.497833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.914 ms 00:57:03.854 [2024-11-26 17:54:04.497846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:03.854 [2024-11-26 17:54:04.537368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:03.854 [2024-11-26 17:54:04.537730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:57:03.854 [2024-11-26 17:54:04.537762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.472 ms 00:57:03.854 [2024-11-26 17:54:04.537775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:04.114 [2024-11-26 17:54:04.581545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:04.114 [2024-11-26 17:54:04.581667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:57:04.114 [2024-11-26 
17:54:04.581689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.735 ms 00:57:04.114 [2024-11-26 17:54:04.581700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:04.114 [2024-11-26 17:54:04.625849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:04.114 [2024-11-26 17:54:04.625947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:57:04.114 [2024-11-26 17:54:04.625967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.985 ms 00:57:04.114 [2024-11-26 17:54:04.625979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:04.114 [2024-11-26 17:54:04.626080] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:57:04.114 [2024-11-26 17:54:04.626104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:57:04.114 [2024-11-26 17:54:04.626119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:57:04.114 [2024-11-26 17:54:04.626133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626329] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:57:04.114 [2024-11-26 17:54:04.626558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 
17:54:04.626656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 
00:57:04.115 [2024-11-26 17:54:04.626967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.626990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 
wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:57:04.115 [2024-11-26 17:54:04.627340] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:57:04.115 [2024-11-26 17:54:04.627351] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 50c977a4-dc9e-442a-be46-ef9fda80b8fe 00:57:04.115 [2024-11-26 17:54:04.627372] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:57:04.115 [2024-11-26 17:54:04.627384] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 156352 00:57:04.115 [2024-11-26 17:54:04.627401] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 154368 00:57:04.115 [2024-11-26 17:54:04.627413] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0129 00:57:04.115 [2024-11-26 17:54:04.627424] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:57:04.115 [2024-11-26 17:54:04.627452] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:57:04.115 [2024-11-26 17:54:04.627462] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:57:04.115 [2024-11-26 17:54:04.627472] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:57:04.115 [2024-11-26 17:54:04.627481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:57:04.115 [2024-11-26 17:54:04.627493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:04.115 [2024-11-26 17:54:04.627515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:57:04.115 [2024-11-26 17:54:04.627527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.417 ms 00:57:04.115 [2024-11-26 17:54:04.627538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:04.115 [2024-11-26 17:54:04.649885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:04.115 [2024-11-26 17:54:04.649976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:57:04.115 [2024-11-26 17:54:04.649995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.306 ms 00:57:04.115 [2024-11-26 17:54:04.650007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:04.115 [2024-11-26 17:54:04.650685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:04.115 [2024-11-26 17:54:04.650703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:57:04.115 [2024-11-26 17:54:04.650717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:57:04.115 [2024-11-26 17:54:04.650728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:04.115 [2024-11-26 
17:54:04.705801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:04.115 [2024-11-26 17:54:04.705888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:57:04.115 [2024-11-26 17:54:04.705908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:04.115 [2024-11-26 17:54:04.705922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:04.115 [2024-11-26 17:54:04.706029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:04.115 [2024-11-26 17:54:04.706042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:57:04.115 [2024-11-26 17:54:04.706054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:04.115 [2024-11-26 17:54:04.706065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:04.115 [2024-11-26 17:54:04.706235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:04.115 [2024-11-26 17:54:04.706251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:57:04.115 [2024-11-26 17:54:04.706263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:04.115 [2024-11-26 17:54:04.706273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:04.115 [2024-11-26 17:54:04.706293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:04.116 [2024-11-26 17:54:04.706305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:57:04.116 [2024-11-26 17:54:04.706316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:04.116 [2024-11-26 17:54:04.706327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:04.375 [2024-11-26 17:54:04.846105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:04.375 [2024-11-26 17:54:04.846186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:57:04.375 [2024-11-26 17:54:04.846206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:04.375 [2024-11-26 17:54:04.846218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:04.375 [2024-11-26 17:54:04.959453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:04.375 [2024-11-26 17:54:04.959539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:57:04.375 [2024-11-26 17:54:04.959557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:04.375 [2024-11-26 17:54:04.959569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:04.375 [2024-11-26 17:54:04.959730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:04.375 [2024-11-26 17:54:04.959749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:57:04.375 [2024-11-26 17:54:04.959761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:04.375 [2024-11-26 17:54:04.959773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:04.375 [2024-11-26 17:54:04.959836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:04.375 [2024-11-26 17:54:04.959849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:57:04.375 [2024-11-26 17:54:04.959861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:04.375 [2024-11-26 17:54:04.959872] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:04.375 [2024-11-26 17:54:04.960012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:04.375 [2024-11-26 17:54:04.960027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:57:04.375 [2024-11-26 17:54:04.960045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:04.375 [2024-11-26 17:54:04.960055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:04.375 [2024-11-26 17:54:04.960099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:04.375 [2024-11-26 17:54:04.960111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:57:04.375 [2024-11-26 17:54:04.960123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:04.375 [2024-11-26 17:54:04.960133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:04.375 [2024-11-26 17:54:04.960186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:04.375 [2024-11-26 17:54:04.960200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:57:04.375 [2024-11-26 17:54:04.960215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:04.375 [2024-11-26 17:54:04.960226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:04.375 [2024-11-26 17:54:04.960284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:04.375 [2024-11-26 17:54:04.960297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:57:04.375 [2024-11-26 17:54:04.960310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:04.375 [2024-11-26 17:54:04.960320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:04.375 [2024-11-26 17:54:04.960491] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 610.630 ms, result 0 00:57:05.753 00:57:05.753 00:57:05.753 17:54:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:57:07.760 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:57:07.760 17:54:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:57:07.760 [2024-11-26 17:54:08.152099] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
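
For reference on the statistics dumped during the shutdown above: the WAF line is consistent with total writes divided by user writes (156352 / 154368 ≈ 1.0129), and the two non-free bands in the validity dump account exactly for the logged total of valid LBAs. A quick arithmetic check (Python; the variable names are just labels for the logged counters):

# Counters from the ftl_debug.c "Dump statistics" section above.
total_writes = 156352          # "total writes"
user_writes = 154368           # "user writes"
print(f"WAF: {total_writes / user_writes:.4f}")   # -> WAF: 1.0129, as logged

# Band validity cross-check: Band 1 (closed) + Band 2 (open) from the
# band dump sum to the logged "total valid LBAs" of 262656.
assert 261120 + 1536 == 262656

A WAF barely above 1.0 fits the workload visible in this log, a single sequential spdk_dd copy, suggesting only a small amount of metadata and relocation traffic on top of the user writes.
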
00:57:07.760 [2024-11-26 17:54:08.152271] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83040 ] 00:57:07.760 [2024-11-26 17:54:08.340643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:57:08.019 [2024-11-26 17:54:08.493071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:57:08.277 [2024-11-26 17:54:08.928713] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:57:08.277 [2024-11-26 17:54:08.928810] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:57:08.538 [2024-11-26 17:54:09.097299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:08.538 [2024-11-26 17:54:09.097397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:57:08.538 [2024-11-26 17:54:09.097417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:57:08.538 [2024-11-26 17:54:09.097430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:08.538 [2024-11-26 17:54:09.097534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:08.538 [2024-11-26 17:54:09.097553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:57:08.538 [2024-11-26 17:54:09.097566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:57:08.538 [2024-11-26 17:54:09.097577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:08.538 [2024-11-26 17:54:09.097602] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:57:08.538 [2024-11-26 17:54:09.098745] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:57:08.538 [2024-11-26 17:54:09.098783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:08.538 [2024-11-26 17:54:09.098797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:57:08.538 [2024-11-26 17:54:09.098810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.189 ms 00:57:08.538 [2024-11-26 17:54:09.098820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:08.538 [2024-11-26 17:54:09.101260] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:57:08.538 [2024-11-26 17:54:09.124305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:08.538 [2024-11-26 17:54:09.124706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:57:08.538 [2024-11-26 17:54:09.124745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.076 ms 00:57:08.538 [2024-11-26 17:54:09.124758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:08.538 [2024-11-26 17:54:09.124908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:08.538 [2024-11-26 17:54:09.124924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:57:08.538 [2024-11-26 17:54:09.124938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:57:08.538 [2024-11-26 17:54:09.124950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:08.538 [2024-11-26 17:54:09.139634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:57:08.538 [2024-11-26 17:54:09.139974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:57:08.538 [2024-11-26 17:54:09.140011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.561 ms 00:57:08.538 [2024-11-26 17:54:09.140035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:08.538 [2024-11-26 17:54:09.140173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:08.538 [2024-11-26 17:54:09.140187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:57:08.538 [2024-11-26 17:54:09.140200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:57:08.538 [2024-11-26 17:54:09.140211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:08.538 [2024-11-26 17:54:09.140323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:08.538 [2024-11-26 17:54:09.140337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:57:08.538 [2024-11-26 17:54:09.140350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:57:08.538 [2024-11-26 17:54:09.140361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:08.538 [2024-11-26 17:54:09.140401] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:57:08.538 [2024-11-26 17:54:09.146969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:08.538 [2024-11-26 17:54:09.147019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:57:08.538 [2024-11-26 17:54:09.147040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.588 ms 00:57:08.538 [2024-11-26 17:54:09.147052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:08.538 [2024-11-26 17:54:09.147104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:08.538 [2024-11-26 17:54:09.147117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:57:08.538 [2024-11-26 17:54:09.147129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:57:08.538 [2024-11-26 17:54:09.147140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:08.538 [2024-11-26 17:54:09.147206] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:57:08.538 [2024-11-26 17:54:09.147237] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:57:08.538 [2024-11-26 17:54:09.147280] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:57:08.538 [2024-11-26 17:54:09.147305] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:57:08.538 [2024-11-26 17:54:09.147411] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:57:08.538 [2024-11-26 17:54:09.147425] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:57:08.538 [2024-11-26 17:54:09.147440] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:57:08.538 [2024-11-26 17:54:09.147454] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:57:08.538 [2024-11-26 17:54:09.147468] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:57:08.538 [2024-11-26 17:54:09.147481] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:57:08.538 [2024-11-26 17:54:09.147493] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:57:08.538 [2024-11-26 17:54:09.147520] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:57:08.538 [2024-11-26 17:54:09.147531] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:57:08.538 [2024-11-26 17:54:09.147542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:08.538 [2024-11-26 17:54:09.147554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:57:08.538 [2024-11-26 17:54:09.147565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:57:08.538 [2024-11-26 17:54:09.147576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:08.538 [2024-11-26 17:54:09.147659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:08.538 [2024-11-26 17:54:09.147672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:57:08.538 [2024-11-26 17:54:09.147684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:57:08.538 [2024-11-26 17:54:09.147694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:08.538 [2024-11-26 17:54:09.147815] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:57:08.538 [2024-11-26 17:54:09.147834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:57:08.538 [2024-11-26 17:54:09.147847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:57:08.538 [2024-11-26 17:54:09.147858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:57:08.538 [2024-11-26 17:54:09.147872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:57:08.538 [2024-11-26 17:54:09.147883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:57:08.538 [2024-11-26 17:54:09.147893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:57:08.538 [2024-11-26 17:54:09.147905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:57:08.538 [2024-11-26 17:54:09.147915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:57:08.538 [2024-11-26 17:54:09.147925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:57:08.538 [2024-11-26 17:54:09.147936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:57:08.538 [2024-11-26 17:54:09.147945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:57:08.538 [2024-11-26 17:54:09.147955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:57:08.538 [2024-11-26 17:54:09.147979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:57:08.538 [2024-11-26 17:54:09.147989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:57:08.538 [2024-11-26 17:54:09.147999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:57:08.538 [2024-11-26 17:54:09.148008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:57:08.539 [2024-11-26 17:54:09.148018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:57:08.539 [2024-11-26 17:54:09.148027] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:57:08.539 [2024-11-26 17:54:09.148037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:57:08.539 [2024-11-26 17:54:09.148046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:57:08.539 [2024-11-26 17:54:09.148056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:57:08.539 [2024-11-26 17:54:09.148065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:57:08.539 [2024-11-26 17:54:09.148075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:57:08.539 [2024-11-26 17:54:09.148084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:57:08.539 [2024-11-26 17:54:09.148093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:57:08.539 [2024-11-26 17:54:09.148103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:57:08.539 [2024-11-26 17:54:09.148112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:57:08.539 [2024-11-26 17:54:09.148121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:57:08.539 [2024-11-26 17:54:09.148131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:57:08.539 [2024-11-26 17:54:09.148140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:57:08.539 [2024-11-26 17:54:09.148149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:57:08.539 [2024-11-26 17:54:09.148158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:57:08.539 [2024-11-26 17:54:09.148166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:57:08.539 [2024-11-26 17:54:09.148176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:57:08.539 [2024-11-26 17:54:09.148185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:57:08.539 [2024-11-26 17:54:09.148195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:57:08.539 [2024-11-26 17:54:09.148204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:57:08.539 [2024-11-26 17:54:09.148214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:57:08.539 [2024-11-26 17:54:09.148223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:57:08.539 [2024-11-26 17:54:09.148232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:57:08.539 [2024-11-26 17:54:09.148240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:57:08.539 [2024-11-26 17:54:09.148250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:57:08.539 [2024-11-26 17:54:09.148259] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:57:08.539 [2024-11-26 17:54:09.148270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:57:08.539 [2024-11-26 17:54:09.148280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:57:08.539 [2024-11-26 17:54:09.148291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:57:08.539 [2024-11-26 17:54:09.148301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:57:08.539 [2024-11-26 17:54:09.148311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:57:08.539 [2024-11-26 17:54:09.148320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:57:08.539 
[2024-11-26 17:54:09.148330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:57:08.539 [2024-11-26 17:54:09.148339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:57:08.539 [2024-11-26 17:54:09.148349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:57:08.539 [2024-11-26 17:54:09.148361] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:57:08.539 [2024-11-26 17:54:09.148374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:57:08.539 [2024-11-26 17:54:09.148391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:57:08.539 [2024-11-26 17:54:09.148402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:57:08.539 [2024-11-26 17:54:09.148413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:57:08.539 [2024-11-26 17:54:09.148424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:57:08.539 [2024-11-26 17:54:09.148435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:57:08.539 [2024-11-26 17:54:09.148447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:57:08.539 [2024-11-26 17:54:09.148458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:57:08.539 [2024-11-26 17:54:09.148469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:57:08.539 [2024-11-26 17:54:09.148479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:57:08.539 [2024-11-26 17:54:09.148490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:57:08.539 [2024-11-26 17:54:09.148513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:57:08.539 [2024-11-26 17:54:09.148524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:57:08.539 [2024-11-26 17:54:09.148534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:57:08.539 [2024-11-26 17:54:09.148547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:57:08.539 [2024-11-26 17:54:09.148558] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:57:08.539 [2024-11-26 17:54:09.148570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:57:08.539 [2024-11-26 17:54:09.148584] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:57:08.539 [2024-11-26 17:54:09.148596] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:57:08.539 [2024-11-26 17:54:09.148607] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:57:08.539 [2024-11-26 17:54:09.148619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:57:08.539 [2024-11-26 17:54:09.148631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:08.539 [2024-11-26 17:54:09.148643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:57:08.539 [2024-11-26 17:54:09.148663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.873 ms 00:57:08.539 [2024-11-26 17:54:09.148674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:08.539 [2024-11-26 17:54:09.198482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:08.539 [2024-11-26 17:54:09.198569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:57:08.539 [2024-11-26 17:54:09.198589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.819 ms 00:57:08.539 [2024-11-26 17:54:09.198608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:08.539 [2024-11-26 17:54:09.198742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:08.539 [2024-11-26 17:54:09.198756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:57:08.539 [2024-11-26 17:54:09.198768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:57:08.539 [2024-11-26 17:54:09.198779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:08.799 [2024-11-26 17:54:09.264814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:08.799 [2024-11-26 17:54:09.265138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:57:08.799 [2024-11-26 17:54:09.265172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.991 ms 00:57:08.799 [2024-11-26 17:54:09.265184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:08.799 [2024-11-26 17:54:09.265284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:08.799 [2024-11-26 17:54:09.265305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:57:08.799 [2024-11-26 17:54:09.265318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:57:08.799 [2024-11-26 17:54:09.265328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:08.799 [2024-11-26 17:54:09.266207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:08.799 [2024-11-26 17:54:09.266231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:57:08.799 [2024-11-26 17:54:09.266243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:57:08.799 [2024-11-26 17:54:09.266254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:08.799 [2024-11-26 17:54:09.266407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:08.799 [2024-11-26 17:54:09.266423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:57:08.799 [2024-11-26 17:54:09.266442] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:57:08.799 [2024-11-26 17:54:09.266453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:08.799 [2024-11-26 17:54:09.290414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:08.799 [2024-11-26 17:54:09.290512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:57:08.799 [2024-11-26 17:54:09.290534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.971 ms 00:57:08.799 [2024-11-26 17:54:09.290547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:08.799 [2024-11-26 17:54:09.314350] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:57:08.799 [2024-11-26 17:54:09.314440] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:57:08.799 [2024-11-26 17:54:09.314463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:08.799 [2024-11-26 17:54:09.314477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:57:08.799 [2024-11-26 17:54:09.314508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.738 ms 00:57:08.799 [2024-11-26 17:54:09.314521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:08.799 [2024-11-26 17:54:09.348656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:08.799 [2024-11-26 17:54:09.348758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:57:08.799 [2024-11-26 17:54:09.348778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.079 ms 00:57:08.799 [2024-11-26 17:54:09.348792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:08.799 [2024-11-26 17:54:09.371825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:08.799 [2024-11-26 17:54:09.371986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:57:08.799 [2024-11-26 17:54:09.372009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.923 ms 00:57:08.799 [2024-11-26 17:54:09.372021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:08.799 [2024-11-26 17:54:09.394813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:08.799 [2024-11-26 17:54:09.395156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:57:08.799 [2024-11-26 17:54:09.395189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.725 ms 00:57:08.799 [2024-11-26 17:54:09.395202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:08.799 [2024-11-26 17:54:09.396198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:08.799 [2024-11-26 17:54:09.396234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:57:08.799 [2024-11-26 17:54:09.396254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.756 ms 00:57:08.799 [2024-11-26 17:54:09.396265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:09.058 [2024-11-26 17:54:09.505707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:09.058 [2024-11-26 17:54:09.506064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:57:09.058 [2024-11-26 17:54:09.506110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 109.583 ms 00:57:09.058 [2024-11-26 17:54:09.506123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:09.058 [2024-11-26 17:54:09.524351] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:57:09.058 [2024-11-26 17:54:09.529789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:09.058 [2024-11-26 17:54:09.529853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:57:09.058 [2024-11-26 17:54:09.529874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.582 ms 00:57:09.058 [2024-11-26 17:54:09.529886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:09.058 [2024-11-26 17:54:09.530068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:09.058 [2024-11-26 17:54:09.530085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:57:09.058 [2024-11-26 17:54:09.530104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:57:09.058 [2024-11-26 17:54:09.530116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:09.058 [2024-11-26 17:54:09.531612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:09.058 [2024-11-26 17:54:09.531791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:57:09.058 [2024-11-26 17:54:09.531817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.427 ms 00:57:09.058 [2024-11-26 17:54:09.531829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:09.058 [2024-11-26 17:54:09.531890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:09.058 [2024-11-26 17:54:09.531903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:57:09.058 [2024-11-26 17:54:09.531916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:57:09.058 [2024-11-26 17:54:09.531928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:09.058 [2024-11-26 17:54:09.531981] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:57:09.058 [2024-11-26 17:54:09.531995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:09.058 [2024-11-26 17:54:09.532006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:57:09.058 [2024-11-26 17:54:09.532017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:57:09.058 [2024-11-26 17:54:09.532028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:09.058 [2024-11-26 17:54:09.577013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:09.058 [2024-11-26 17:54:09.577138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:57:09.058 [2024-11-26 17:54:09.577175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.026 ms 00:57:09.058 [2024-11-26 17:54:09.577188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:09.058 [2024-11-26 17:54:09.577352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:09.058 [2024-11-26 17:54:09.577367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:57:09.058 [2024-11-26 17:54:09.577380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:57:09.058 [2024-11-26 17:54:09.577392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
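
The layout dump in this second startup also lets the L2P sizing be checked directly: 20971520 entries at an address size of 4 bytes is exactly the 80.00 MiB reported for Region l2p, and comparing that with the superblock entry at the matching offset (type 0x2, blk_offs 0x20, blk_sz 0x5000) implies a 4 KiB FTL block. A consistency sketch (Python; the type-0x2/l2p pairing is inferred from the matching offsets, not stated in the log):

# Values from the ftl_layout.c / ftl_sb_v5.c dumps above.
l2p_entries = 20971520         # "L2P entries"
addr_size = 4                  # "L2P address size" (bytes)
l2p_bytes = l2p_entries * addr_size
assert l2p_bytes == 80 * 1024 * 1024   # "Region l2p ... blocks: 80.00 MiB"

# The superblock lists the same region as 0x5000 blocks; the ratio gives
# the block size underlying all blk_offs/blk_sz figures.
blocks = 0x5000
print(l2p_bytes // blocks)     # -> 4096, i.e. 4 KiB blocks

The same 4 KiB unit reconciles the other regions as well, e.g. Region sb at 0x20 blocks is the 0.12 MiB shown in the MiB view of the layout.
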
00:57:09.058 [2024-11-26 17:54:09.579082] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 481.950 ms, result 0 00:57:10.435  [2024-11-26T17:54:12.065Z] Copying: 27/1024 [MB] (27 MBps) [2024-11-26T17:54:47.014Z] Copying: 1024/1024 [MB] (average 27 MBps) [2024-11-26 17:54:46.815210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:46.320 [2024-11-26 17:54:46.815588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:57:46.320 [2024-11-26 17:54:46.815629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:57:46.320 [2024-11-26 17:54:46.815645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.320 [2024-11-26 17:54:46.815719] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:57:46.320 [2024-11-26 17:54:46.822000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:46.320 [2024-11-26 17:54:46.822059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:57:46.320 [2024-11-26 17:54:46.822077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.261 ms 00:57:46.320 [2024-11-26 17:54:46.822090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.320 [2024-11-26 17:54:46.822375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0]
Action 00:57:46.320 [2024-11-26 17:54:46.822391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:57:46.320 [2024-11-26 17:54:46.822405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:57:46.320 [2024-11-26 17:54:46.822419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.320 [2024-11-26 17:54:46.825977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:46.320 [2024-11-26 17:54:46.826006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:57:46.320 [2024-11-26 17:54:46.826020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.544 ms 00:57:46.320 [2024-11-26 17:54:46.826039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.320 [2024-11-26 17:54:46.832079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:46.320 [2024-11-26 17:54:46.832224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:57:46.320 [2024-11-26 17:54:46.832246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.025 ms 00:57:46.320 [2024-11-26 17:54:46.832257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.320 [2024-11-26 17:54:46.870391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:46.320 [2024-11-26 17:54:46.870438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:57:46.320 [2024-11-26 17:54:46.870454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.102 ms 00:57:46.320 [2024-11-26 17:54:46.870481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.320 [2024-11-26 17:54:46.892147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:46.320 [2024-11-26 17:54:46.892194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:57:46.320 [2024-11-26 17:54:46.892212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.645 ms 00:57:46.320 [2024-11-26 17:54:46.892224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.320 [2024-11-26 17:54:46.894656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:46.320 [2024-11-26 17:54:46.894696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:57:46.320 [2024-11-26 17:54:46.894709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.378 ms 00:57:46.320 [2024-11-26 17:54:46.894720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.320 [2024-11-26 17:54:46.932156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:46.320 [2024-11-26 17:54:46.932203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:57:46.320 [2024-11-26 17:54:46.932220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.475 ms 00:57:46.320 [2024-11-26 17:54:46.932231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.320 [2024-11-26 17:54:46.972234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:46.320 [2024-11-26 17:54:46.972304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:57:46.320 [2024-11-26 17:54:46.972323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.018 ms 00:57:46.320 [2024-11-26 17:54:46.972334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.320 [2024-11-26 
17:54:47.008168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:46.320 [2024-11-26 17:54:47.008216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:57:46.320 [2024-11-26 17:54:47.008233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.827 ms 00:57:46.320 [2024-11-26 17:54:47.008244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.580 [2024-11-26 17:54:47.043991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:46.580 [2024-11-26 17:54:47.044037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:57:46.580 [2024-11-26 17:54:47.044052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.702 ms 00:57:46.580 [2024-11-26 17:54:47.044063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.580 [2024-11-26 17:54:47.044107] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:57:46.580 [2024-11-26 17:54:47.044135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:57:46.580 [2024-11-26 17:54:47.044156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:57:46.580 [2024-11-26 17:54:47.044168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3 - Band 100: 0 / 261120 wr_cnt: 0 state: free 00:57:46.582 [2024-11-26 17:54:47.045303] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:57:46.582 [2024-11-26 17:54:47.045313] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 50c977a4-dc9e-442a-be46-ef9fda80b8fe 00:57:46.582 [2024-11-26 17:54:47.045325] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:57:46.582 [2024-11-26 17:54:47.045336] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:57:46.582 [2024-11-26 17:54:47.045347] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:57:46.582 [2024-11-26 17:54:47.045358] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:57:46.582 [2024-11-26 17:54:47.045383] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:57:46.582 [2024-11-26 17:54:47.045394] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:57:46.582 [2024-11-26 17:54:47.045404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:57:46.582 [2024-11-26 17:54:47.045414] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:57:46.582 [2024-11-26 17:54:47.045424] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:57:46.582 [2024-11-26 17:54:47.045435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:46.582 [2024-11-26 17:54:47.045446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:57:46.582 [2024-11-26 17:54:47.045457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.332 ms 00:57:46.582 [2024-11-26 17:54:47.045473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.582 [2024-11-26 17:54:47.066236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:46.582 [2024-11-26 17:54:47.066405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:57:46.582 [2024-11-26 17:54:47.066427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.746 ms 00:57:46.582 [2024-11-26 17:54:47.066439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.582 [2024-11-26 17:54:47.067037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:57:46.582 [2024-11-26 17:54:47.067061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:57:46.582 [2024-11-26 17:54:47.067073] mngt/ftl_mngt.c:
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:57:46.582 [2024-11-26 17:54:47.067084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.582 [2024-11-26 17:54:47.121733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:46.582 [2024-11-26 17:54:47.121777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:57:46.582 [2024-11-26 17:54:47.121792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:46.582 [2024-11-26 17:54:47.121804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.582 [2024-11-26 17:54:47.121874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:46.582 [2024-11-26 17:54:47.121892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:57:46.582 [2024-11-26 17:54:47.121903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:46.582 [2024-11-26 17:54:47.121914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.582 [2024-11-26 17:54:47.122009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:46.582 [2024-11-26 17:54:47.122023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:57:46.582 [2024-11-26 17:54:47.122045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:46.582 [2024-11-26 17:54:47.122056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.582 [2024-11-26 17:54:47.122076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:46.582 [2024-11-26 17:54:47.122088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:57:46.582 [2024-11-26 17:54:47.122103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:46.582 [2024-11-26 17:54:47.122114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.582 [2024-11-26 17:54:47.263129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:46.582 [2024-11-26 17:54:47.263216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:57:46.582 [2024-11-26 17:54:47.263235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:46.582 [2024-11-26 17:54:47.263248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.841 [2024-11-26 17:54:47.371822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:46.841 [2024-11-26 17:54:47.371916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:57:46.841 [2024-11-26 17:54:47.371933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:46.841 [2024-11-26 17:54:47.371945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.841 [2024-11-26 17:54:47.372086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:46.841 [2024-11-26 17:54:47.372101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:57:46.841 [2024-11-26 17:54:47.372113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:46.841 [2024-11-26 17:54:47.372124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.841 [2024-11-26 17:54:47.372185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:46.841 [2024-11-26 17:54:47.372199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize bands 00:57:46.841 [2024-11-26 17:54:47.372211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:46.841 [2024-11-26 17:54:47.372226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.841 [2024-11-26 17:54:47.372352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:46.841 [2024-11-26 17:54:47.372366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:57:46.841 [2024-11-26 17:54:47.372378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:46.841 [2024-11-26 17:54:47.372390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.842 [2024-11-26 17:54:47.372430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:46.842 [2024-11-26 17:54:47.372443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:57:46.842 [2024-11-26 17:54:47.372454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:46.842 [2024-11-26 17:54:47.372465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.842 [2024-11-26 17:54:47.372540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:46.842 [2024-11-26 17:54:47.372554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:57:46.842 [2024-11-26 17:54:47.372566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:46.842 [2024-11-26 17:54:47.372577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.842 [2024-11-26 17:54:47.372631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:57:46.842 [2024-11-26 17:54:47.372644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:57:46.842 [2024-11-26 17:54:47.372656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:57:46.842 [2024-11-26 17:54:47.372671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:57:46.842 [2024-11-26 17:54:47.372821] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 558.479 ms, result 0 00:57:48.221 00:57:48.221 00:57:48.221 17:54:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:57:50.190 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:57:50.190 17:54:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:57:50.190 17:54:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:57:50.190 17:54:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:57:50.190 17:54:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:57:50.190 17:54:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:57:50.190 17:54:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:57:50.190 17:54:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:57:50.190 Process with pid 81267 is not found 00:57:50.190 17:54:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81267 00:57:50.190 17:54:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' 
-z 81267 ']' 00:57:50.190 17:54:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81267 00:57:50.190 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81267) - No such process 00:57:50.190 17:54:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81267 is not found' 00:57:50.190 17:54:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:57:50.449 17:54:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:57:50.449 Remove shared memory files 00:57:50.449 17:54:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:57:50.449 17:54:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:57:50.449 17:54:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:57:50.449 17:54:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:57:50.449 17:54:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:57:50.449 17:54:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:57:50.449 ************************************ 00:57:50.449 END TEST ftl_dirty_shutdown 00:57:50.449 ************************************ 00:57:50.449 00:57:50.449 real 3m35.435s 00:57:50.449 user 4m2.027s 00:57:50.449 sys 0m41.504s 00:57:50.449 17:54:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:57:50.449 17:54:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:57:50.708 17:54:51 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:57:50.708 17:54:51 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:57:50.708 17:54:51 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:57:50.708 17:54:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:57:50.708 ************************************ 00:57:50.708 START TEST ftl_upgrade_shutdown 00:57:50.709 ************************************ 00:57:50.709 17:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:57:50.709 * Looking for test storage... 
00:57:50.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:57:50.709 17:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:57:50.709 17:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:57:50.709 17:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:57:50.709 17:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:57:50.709 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:57:50.709 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:57:50.709 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:57:50.709 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:57:50.709 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:57:50.709 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:57:50.709 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:57:50.709 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:57:50.709 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:57:50.709 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:57:50.709 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:57:50.709 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:57:50.709 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:57:50.709 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:57:50.709 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:57:50.709 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:57:50.709 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:57:50.709 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:57:50.709 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:57:50.709 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:57:50.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:50.970 --rc genhtml_branch_coverage=1 00:57:50.970 --rc genhtml_function_coverage=1 00:57:50.970 --rc genhtml_legend=1 00:57:50.970 --rc geninfo_all_blocks=1 00:57:50.970 --rc geninfo_unexecuted_blocks=1 00:57:50.970 00:57:50.970 ' 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:57:50.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:50.970 --rc genhtml_branch_coverage=1 00:57:50.970 --rc genhtml_function_coverage=1 00:57:50.970 --rc genhtml_legend=1 00:57:50.970 --rc geninfo_all_blocks=1 00:57:50.970 --rc geninfo_unexecuted_blocks=1 00:57:50.970 00:57:50.970 ' 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:57:50.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:50.970 --rc genhtml_branch_coverage=1 00:57:50.970 --rc genhtml_function_coverage=1 00:57:50.970 --rc genhtml_legend=1 00:57:50.970 --rc geninfo_all_blocks=1 00:57:50.970 --rc geninfo_unexecuted_blocks=1 00:57:50.970 00:57:50.970 ' 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:57:50.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:50.970 --rc genhtml_branch_coverage=1 00:57:50.970 --rc genhtml_function_coverage=1 00:57:50.970 --rc genhtml_legend=1 00:57:50.970 --rc geninfo_all_blocks=1 00:57:50.970 --rc geninfo_unexecuted_blocks=1 00:57:50.970 00:57:50.970 ' 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:57:50.970 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:57:50.971 17:54:51 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83534 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:57:50.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83534 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83534 ']' 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:57:50.971 17:54:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:57:50.971 [2024-11-26 17:54:51.574579] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:57:50.971 [2024-11-26 17:54:51.574729] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83534 ] 00:57:51.230 [2024-11-26 17:54:51.763994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:57:51.230 [2024-11-26 17:54:51.915894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:57:52.609 17:54:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:52.609 17:54:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:57:52.609 17:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:57:52.609 17:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:57:52.609 17:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:57:52.609 17:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:57:52.609 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:57:52.609 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:57:52.609 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:57:52.609 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:57:52.609 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:57:52.609 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:57:52.609 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:57:52.609 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:57:52.609 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:57:52.609 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:57:52.609 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:57:52.609 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:57:52.609 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:57:52.609 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:57:52.609 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:57:52.609 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:57:52.609 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:57:52.868 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:57:52.868 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:57:52.868 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:57:52.868 17:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:57:52.868 17:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:57:52.868 17:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:57:52.868 17:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:57:52.868 17:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:57:53.127 17:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:57:53.127 { 00:57:53.127 "name": "basen1", 00:57:53.127 "aliases": [ 00:57:53.127 "6a0f0ee9-423e-480d-bd17-d9aabb0cf0c0" 00:57:53.127 ], 00:57:53.127 "product_name": "NVMe disk", 00:57:53.127 "block_size": 4096, 00:57:53.127 "num_blocks": 1310720, 00:57:53.127 "uuid": "6a0f0ee9-423e-480d-bd17-d9aabb0cf0c0", 00:57:53.127 "numa_id": -1, 00:57:53.127 "assigned_rate_limits": { 00:57:53.127 "rw_ios_per_sec": 0, 00:57:53.127 "rw_mbytes_per_sec": 0, 00:57:53.127 "r_mbytes_per_sec": 0, 00:57:53.127 "w_mbytes_per_sec": 0 00:57:53.127 }, 00:57:53.127 "claimed": true, 00:57:53.127 "claim_type": "read_many_write_one", 00:57:53.127 "zoned": false, 00:57:53.127 "supported_io_types": { 00:57:53.127 "read": true, 00:57:53.127 "write": true, 00:57:53.127 "unmap": true, 00:57:53.127 "flush": true, 00:57:53.127 "reset": true, 00:57:53.127 "nvme_admin": true, 00:57:53.127 "nvme_io": true, 00:57:53.127 "nvme_io_md": false, 00:57:53.127 "write_zeroes": true, 00:57:53.127 "zcopy": false, 00:57:53.127 "get_zone_info": false, 00:57:53.127 "zone_management": false, 00:57:53.127 "zone_append": false, 00:57:53.127 "compare": true, 00:57:53.127 "compare_and_write": false, 00:57:53.127 "abort": true, 00:57:53.127 "seek_hole": false, 00:57:53.127 "seek_data": false, 00:57:53.127 "copy": true, 00:57:53.127 "nvme_iov_md": false 00:57:53.127 }, 00:57:53.127 "driver_specific": { 00:57:53.127 "nvme": [ 00:57:53.127 { 00:57:53.127 "pci_address": "0000:00:11.0", 00:57:53.127 "trid": { 00:57:53.127 "trtype": "PCIe", 00:57:53.127 "traddr": "0000:00:11.0" 00:57:53.127 }, 00:57:53.127 "ctrlr_data": { 00:57:53.127 "cntlid": 0, 00:57:53.127 "vendor_id": "0x1b36", 00:57:53.127 "model_number": "QEMU NVMe Ctrl", 00:57:53.127 "serial_number": "12341", 00:57:53.127 "firmware_revision": "8.0.0", 00:57:53.127 "subnqn": "nqn.2019-08.org.qemu:12341", 00:57:53.127 "oacs": { 00:57:53.127 "security": 0, 00:57:53.127 "format": 1, 00:57:53.128 "firmware": 0, 00:57:53.128 "ns_manage": 1 00:57:53.128 }, 00:57:53.128 "multi_ctrlr": false, 00:57:53.128 "ana_reporting": false 00:57:53.128 }, 00:57:53.128 "vs": { 00:57:53.128 "nvme_version": "1.4" 00:57:53.128 }, 00:57:53.128 "ns_data": { 00:57:53.128 "id": 1, 00:57:53.128 "can_share": false 00:57:53.128 } 00:57:53.128 } 00:57:53.128 ], 00:57:53.128 "mp_policy": "active_passive" 00:57:53.128 } 00:57:53.128 } 00:57:53.128 ]' 00:57:53.128 17:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:57:53.128 17:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:57:53.128 17:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:57:53.128 17:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:57:53.128 17:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:57:53.128 17:54:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:57:53.128 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:57:53.128 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:57:53.128 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:57:53.128 17:54:53 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:57:53.128 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:57:53.387 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=249ff458-f7da-4567-8bb6-3fedfa013c41 00:57:53.387 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:57:53.387 17:54:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 249ff458-f7da-4567-8bb6-3fedfa013c41 00:57:53.646 17:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:57:53.904 17:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=dbd50d15-f58f-4362-ad4e-deef4293ea2f 00:57:53.904 17:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u dbd50d15-f58f-4362-ad4e-deef4293ea2f 00:57:53.904 17:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=466372af-d045-4a83-9b02-1b1f0b96dc62 00:57:53.905 17:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 466372af-d045-4a83-9b02-1b1f0b96dc62 ]] 00:57:53.905 17:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 466372af-d045-4a83-9b02-1b1f0b96dc62 5120 00:57:53.905 17:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:57:53.905 17:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:57:53.905 17:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=466372af-d045-4a83-9b02-1b1f0b96dc62 00:57:53.905 17:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:57:53.905 17:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 466372af-d045-4a83-9b02-1b1f0b96dc62 00:57:53.905 17:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=466372af-d045-4a83-9b02-1b1f0b96dc62 00:57:53.905 17:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:57:53.905 17:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:57:53.905 17:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:57:53.905 17:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 466372af-d045-4a83-9b02-1b1f0b96dc62 00:57:54.164 17:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:57:54.164 { 00:57:54.164 "name": "466372af-d045-4a83-9b02-1b1f0b96dc62", 00:57:54.164 "aliases": [ 00:57:54.164 "lvs/basen1p0" 00:57:54.164 ], 00:57:54.164 "product_name": "Logical Volume", 00:57:54.164 "block_size": 4096, 00:57:54.164 "num_blocks": 5242880, 00:57:54.164 "uuid": "466372af-d045-4a83-9b02-1b1f0b96dc62", 00:57:54.164 "assigned_rate_limits": { 00:57:54.164 "rw_ios_per_sec": 0, 00:57:54.164 "rw_mbytes_per_sec": 0, 00:57:54.164 "r_mbytes_per_sec": 0, 00:57:54.164 "w_mbytes_per_sec": 0 00:57:54.164 }, 00:57:54.164 "claimed": false, 00:57:54.164 "zoned": false, 00:57:54.164 "supported_io_types": { 00:57:54.164 "read": true, 00:57:54.164 "write": true, 00:57:54.164 "unmap": true, 00:57:54.164 "flush": false, 00:57:54.164 "reset": true, 00:57:54.164 "nvme_admin": false, 00:57:54.164 "nvme_io": false, 00:57:54.164 "nvme_io_md": false, 00:57:54.164 "write_zeroes": 
true, 00:57:54.164 "zcopy": false, 00:57:54.164 "get_zone_info": false, 00:57:54.164 "zone_management": false, 00:57:54.164 "zone_append": false, 00:57:54.164 "compare": false, 00:57:54.164 "compare_and_write": false, 00:57:54.164 "abort": false, 00:57:54.164 "seek_hole": true, 00:57:54.164 "seek_data": true, 00:57:54.164 "copy": false, 00:57:54.164 "nvme_iov_md": false 00:57:54.164 }, 00:57:54.164 "driver_specific": { 00:57:54.164 "lvol": { 00:57:54.164 "lvol_store_uuid": "dbd50d15-f58f-4362-ad4e-deef4293ea2f", 00:57:54.164 "base_bdev": "basen1", 00:57:54.164 "thin_provision": true, 00:57:54.164 "num_allocated_clusters": 0, 00:57:54.164 "snapshot": false, 00:57:54.164 "clone": false, 00:57:54.164 "esnap_clone": false 00:57:54.164 } 00:57:54.164 } 00:57:54.164 } 00:57:54.164 ]' 00:57:54.164 17:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:57:54.164 17:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:57:54.164 17:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:57:54.424 17:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:57:54.424 17:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:57:54.424 17:54:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:57:54.424 17:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:57:54.424 17:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:57:54.424 17:54:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:57:54.684 17:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:57:54.684 17:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:57:54.684 17:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:57:54.944 17:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:57:54.944 17:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:57:54.944 17:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 466372af-d045-4a83-9b02-1b1f0b96dc62 -c cachen1p0 --l2p_dram_limit 2 00:57:54.944 [2024-11-26 17:54:55.599124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:54.944 [2024-11-26 17:54:55.599198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:57:54.944 [2024-11-26 17:54:55.599222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:57:54.944 [2024-11-26 17:54:55.599234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:54.944 [2024-11-26 17:54:55.599327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:54.944 [2024-11-26 17:54:55.599340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:57:54.944 [2024-11-26 17:54:55.599363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:57:54.944 [2024-11-26 17:54:55.599375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:54.944 [2024-11-26 17:54:55.599404] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:57:54.944 [2024-11-26 
17:54:55.600565] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:57:54.944 [2024-11-26 17:54:55.600606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:54.944 [2024-11-26 17:54:55.600618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:57:54.944 [2024-11-26 17:54:55.600633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.205 ms 00:57:54.944 [2024-11-26 17:54:55.600643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:54.944 [2024-11-26 17:54:55.600740] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID f7bad5d5-01fa-4dfe-bb1f-4590f7e6f6ba 00:57:54.944 [2024-11-26 17:54:55.603179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:54.944 [2024-11-26 17:54:55.603349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:57:54.944 [2024-11-26 17:54:55.603382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:57:54.944 [2024-11-26 17:54:55.603397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:54.944 [2024-11-26 17:54:55.618126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:54.944 [2024-11-26 17:54:55.618319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:57:54.944 [2024-11-26 17:54:55.618345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.673 ms 00:57:54.944 [2024-11-26 17:54:55.618359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:54.944 [2024-11-26 17:54:55.618419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:54.944 [2024-11-26 17:54:55.618436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:57:54.944 [2024-11-26 17:54:55.618448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:57:54.944 [2024-11-26 17:54:55.618465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:54.944 [2024-11-26 17:54:55.618566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:54.944 [2024-11-26 17:54:55.618583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:57:54.944 [2024-11-26 17:54:55.618599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:57:54.944 [2024-11-26 17:54:55.618615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:54.944 [2024-11-26 17:54:55.618647] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:57:54.944 [2024-11-26 17:54:55.625412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:54.944 [2024-11-26 17:54:55.625572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:57:54.944 [2024-11-26 17:54:55.625600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.784 ms 00:57:54.944 [2024-11-26 17:54:55.625611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:54.944 [2024-11-26 17:54:55.625651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:54.944 [2024-11-26 17:54:55.625662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:57:54.944 [2024-11-26 17:54:55.625677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:57:54.944 [2024-11-26 17:54:55.625687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:57:54.944 [2024-11-26 17:54:55.625726] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:57:54.944 [2024-11-26 17:54:55.625872] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:57:54.944 [2024-11-26 17:54:55.625894] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:57:54.944 [2024-11-26 17:54:55.625908] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:57:54.944 [2024-11-26 17:54:55.625925] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:57:54.944 [2024-11-26 17:54:55.625938] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:57:54.944 [2024-11-26 17:54:55.625953] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:57:54.944 [2024-11-26 17:54:55.625967] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:57:54.944 [2024-11-26 17:54:55.625981] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:57:54.944 [2024-11-26 17:54:55.625991] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:57:54.944 [2024-11-26 17:54:55.626005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:54.944 [2024-11-26 17:54:55.626015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:57:54.944 [2024-11-26 17:54:55.626031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.282 ms 00:57:54.944 [2024-11-26 17:54:55.626041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:54.944 [2024-11-26 17:54:55.626119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:54.944 [2024-11-26 17:54:55.626142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:57:54.945 [2024-11-26 17:54:55.626157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:57:54.945 [2024-11-26 17:54:55.626167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:54.945 [2024-11-26 17:54:55.626275] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:57:54.945 [2024-11-26 17:54:55.626289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:57:54.945 [2024-11-26 17:54:55.626304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:57:54.945 [2024-11-26 17:54:55.626315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:57:54.945 [2024-11-26 17:54:55.626330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:57:54.945 [2024-11-26 17:54:55.626340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:57:54.945 [2024-11-26 17:54:55.626353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:57:54.945 [2024-11-26 17:54:55.626363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:57:54.945 [2024-11-26 17:54:55.626375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:57:54.945 [2024-11-26 17:54:55.626384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:57:54.945 [2024-11-26 17:54:55.626397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:57:54.945 [2024-11-26 17:54:55.626406] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:57:54.945 [2024-11-26 17:54:55.626419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:57:54.945 [2024-11-26 17:54:55.626428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:57:54.945 [2024-11-26 17:54:55.626440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:57:54.945 [2024-11-26 17:54:55.626450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:57:54.945 [2024-11-26 17:54:55.626467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:57:54.945 [2024-11-26 17:54:55.626477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:57:54.945 [2024-11-26 17:54:55.626489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:57:54.945 [2024-11-26 17:54:55.626512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:57:54.945 [2024-11-26 17:54:55.626525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:57:54.945 [2024-11-26 17:54:55.626534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:57:54.945 [2024-11-26 17:54:55.626547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:57:54.945 [2024-11-26 17:54:55.626557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:57:54.945 [2024-11-26 17:54:55.626569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:57:54.945 [2024-11-26 17:54:55.626578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:57:54.945 [2024-11-26 17:54:55.626590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:57:54.945 [2024-11-26 17:54:55.626599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:57:54.945 [2024-11-26 17:54:55.626611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:57:54.945 [2024-11-26 17:54:55.626620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:57:54.945 [2024-11-26 17:54:55.626632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:57:54.945 [2024-11-26 17:54:55.626641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:57:54.945 [2024-11-26 17:54:55.626657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:57:54.945 [2024-11-26 17:54:55.626665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:57:54.945 [2024-11-26 17:54:55.626677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:57:54.945 [2024-11-26 17:54:55.626686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:57:54.945 [2024-11-26 17:54:55.626698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:57:54.945 [2024-11-26 17:54:55.626707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:57:54.945 [2024-11-26 17:54:55.626719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:57:54.945 [2024-11-26 17:54:55.626728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:57:54.945 [2024-11-26 17:54:55.626740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:57:54.945 [2024-11-26 17:54:55.626749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:57:54.945 [2024-11-26 17:54:55.626763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:57:54.945 [2024-11-26 17:54:55.626771] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:57:54.945 [2024-11-26 17:54:55.626785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:57:54.945 [2024-11-26 17:54:55.626795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:57:54.945 [2024-11-26 17:54:55.626808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:57:54.945 [2024-11-26 17:54:55.626821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:57:54.945 [2024-11-26 17:54:55.626836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:57:54.945 [2024-11-26 17:54:55.626845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:57:54.945 [2024-11-26 17:54:55.626858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:57:54.945 [2024-11-26 17:54:55.626867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:57:54.945 [2024-11-26 17:54:55.626880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:57:54.945 [2024-11-26 17:54:55.626895] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:57:54.945 [2024-11-26 17:54:55.626914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:57:54.945 [2024-11-26 17:54:55.626926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:57:54.945 [2024-11-26 17:54:55.626940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:57:54.945 [2024-11-26 17:54:55.626951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:57:54.945 [2024-11-26 17:54:55.626964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:57:54.945 [2024-11-26 17:54:55.626974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:57:54.945 [2024-11-26 17:54:55.626987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:57:54.945 [2024-11-26 17:54:55.626998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:57:54.945 [2024-11-26 17:54:55.627011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:57:54.945 [2024-11-26 17:54:55.627021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:57:54.945 [2024-11-26 17:54:55.627039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:57:54.945 [2024-11-26 17:54:55.627050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:57:54.945 [2024-11-26 17:54:55.627063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:57:54.945 [2024-11-26 17:54:55.627073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:57:54.945 [2024-11-26 17:54:55.627087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:57:54.945 [2024-11-26 17:54:55.627097] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:57:54.945 [2024-11-26 17:54:55.627111] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:57:54.945 [2024-11-26 17:54:55.627123] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:57:54.945 [2024-11-26 17:54:55.627136] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:57:54.945 [2024-11-26 17:54:55.627146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:57:54.945 [2024-11-26 17:54:55.627159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:57:54.945 [2024-11-26 17:54:55.627169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:54.945 [2024-11-26 17:54:55.627183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:57:54.945 [2024-11-26 17:54:55.627193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.959 ms 00:57:54.945 [2024-11-26 17:54:55.627206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:54.945 [2024-11-26 17:54:55.627256] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
00:57:54.945 [2024-11-26 17:54:55.627276] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:57:59.184 [2024-11-26 17:54:59.131672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:59.184 [2024-11-26 17:54:59.131978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:57:59.184 [2024-11-26 17:54:59.132012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3510.101 ms 00:57:59.184 [2024-11-26 17:54:59.132029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:59.184 [2024-11-26 17:54:59.180163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:59.184 [2024-11-26 17:54:59.180248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:57:59.184 [2024-11-26 17:54:59.180269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.889 ms 00:57:59.184 [2024-11-26 17:54:59.180284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:59.184 [2024-11-26 17:54:59.180443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:59.184 [2024-11-26 17:54:59.180462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:57:59.184 [2024-11-26 17:54:59.180474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:57:59.184 [2024-11-26 17:54:59.180516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:59.184 [2024-11-26 17:54:59.235134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:59.184 [2024-11-26 17:54:59.235214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:57:59.184 [2024-11-26 17:54:59.235233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 54.653 ms 00:57:59.184 [2024-11-26 17:54:59.235247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:59.184 [2024-11-26 17:54:59.235329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:59.184 [2024-11-26 17:54:59.235345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:57:59.184 [2024-11-26 17:54:59.235366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:57:59.184 [2024-11-26 17:54:59.235381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:59.184 [2024-11-26 17:54:59.236297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:59.184 [2024-11-26 17:54:59.236326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:57:59.184 [2024-11-26 17:54:59.236351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.764 ms 00:57:59.184 [2024-11-26 17:54:59.236365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:59.184 [2024-11-26 17:54:59.236416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:59.184 [2024-11-26 17:54:59.236435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:57:59.184 [2024-11-26 17:54:59.236446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:57:59.184 [2024-11-26 17:54:59.236462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:59.184 [2024-11-26 17:54:59.262347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:59.184 [2024-11-26 17:54:59.262617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:57:59.185 [2024-11-26 17:54:59.262648] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.901 ms 00:57:59.185 [2024-11-26 17:54:59.262663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:59.185 [2024-11-26 17:54:59.290423] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:57:59.185 [2024-11-26 17:54:59.292377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:59.185 [2024-11-26 17:54:59.292413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:57:59.185 [2024-11-26 17:54:59.292438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.602 ms 00:57:59.185 [2024-11-26 17:54:59.292452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:59.185 [2024-11-26 17:54:59.327870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:59.185 [2024-11-26 17:54:59.328130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:57:59.185 [2024-11-26 17:54:59.328167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.389 ms 00:57:59.185 [2024-11-26 17:54:59.328180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:59.185 [2024-11-26 17:54:59.328313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:59.185 [2024-11-26 17:54:59.328326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:57:59.185 [2024-11-26 17:54:59.328346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:57:59.185 [2024-11-26 17:54:59.328357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:59.185 [2024-11-26 17:54:59.364931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:59.185 [2024-11-26 17:54:59.364997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:57:59.185 [2024-11-26 17:54:59.365019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.559 ms 00:57:59.185 [2024-11-26 17:54:59.365030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:59.185 [2024-11-26 17:54:59.401945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:59.185 [2024-11-26 17:54:59.402002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:57:59.185 [2024-11-26 17:54:59.402023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.908 ms 00:57:59.185 [2024-11-26 17:54:59.402034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:59.185 [2024-11-26 17:54:59.402813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:59.185 [2024-11-26 17:54:59.402833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:57:59.185 [2024-11-26 17:54:59.402854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.729 ms 00:57:59.185 [2024-11-26 17:54:59.402865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:59.185 [2024-11-26 17:54:59.510023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:59.185 [2024-11-26 17:54:59.510098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:57:59.185 [2024-11-26 17:54:59.510127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 107.253 ms 00:57:59.185 [2024-11-26 17:54:59.510139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:59.185 [2024-11-26 17:54:59.550402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:57:59.185 [2024-11-26 17:54:59.550471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:57:59.185 [2024-11-26 17:54:59.550505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.198 ms 00:57:59.185 [2024-11-26 17:54:59.550519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:59.185 [2024-11-26 17:54:59.590162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:59.185 [2024-11-26 17:54:59.590222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:57:59.185 [2024-11-26 17:54:59.590243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.647 ms 00:57:59.185 [2024-11-26 17:54:59.590255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:59.185 [2024-11-26 17:54:59.627582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:59.185 [2024-11-26 17:54:59.627778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:57:59.185 [2024-11-26 17:54:59.627810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.332 ms 00:57:59.185 [2024-11-26 17:54:59.627822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:59.185 [2024-11-26 17:54:59.627879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:59.185 [2024-11-26 17:54:59.627891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:57:59.185 [2024-11-26 17:54:59.627912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:57:59.185 [2024-11-26 17:54:59.627923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:59.185 [2024-11-26 17:54:59.628049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:57:59.185 [2024-11-26 17:54:59.628067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:57:59.185 [2024-11-26 17:54:59.628081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:57:59.185 [2024-11-26 17:54:59.628092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:57:59.185 [2024-11-26 17:54:59.629493] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4036.398 ms, result 0 00:57:59.185 { 00:57:59.185 "name": "ftl", 00:57:59.185 "uuid": "f7bad5d5-01fa-4dfe-bb1f-4590f7e6f6ba" 00:57:59.185 } 00:57:59.185 17:54:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:57:59.185 [2024-11-26 17:54:59.863965] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:57:59.444 17:54:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:57:59.444 17:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:57:59.704 [2024-11-26 17:55:00.307864] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:57:59.704 17:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:57:59.962 [2024-11-26 17:55:00.526697] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:57:59.962 17:55:00 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:58:00.220 17:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:58:00.220 17:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:58:00.220 17:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:58:00.220 17:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:58:00.220 17:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:58:00.220 17:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:58:00.220 17:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:58:00.220 17:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:58:00.220 17:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:58:00.220 17:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:58:00.220 17:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:58:00.220 Fill FTL, iteration 1 00:58:00.220 17:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:58:00.220 17:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:58:00.221 17:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:58:00.221 17:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:58:00.221 17:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:58:00.221 17:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:58:00.221 17:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83666 00:58:00.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:58:00.221 17:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:58:00.221 17:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83666 /var/tmp/spdk.tgt.sock 00:58:00.221 17:55:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83666 ']' 00:58:00.221 17:55:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:58:00.221 17:55:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:58:00.221 17:55:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:58:00.221 17:55:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:58:00.221 17:55:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:58:00.478 [2024-11-26 17:55:01.013554] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:58:00.478 [2024-11-26 17:55:01.013887] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83666 ] 00:58:00.736 [2024-11-26 17:55:01.199937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:00.736 [2024-11-26 17:55:01.344866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:58:02.111 17:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:58:02.111 17:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:58:02.112 17:55:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:58:02.112 ftln1 00:58:02.112 17:55:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:58:02.112 17:55:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:58:02.371 17:55:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:58:02.371 17:55:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83666 00:58:02.371 17:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83666 ']' 00:58:02.371 17:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83666 00:58:02.371 17:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:58:02.371 17:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:58:02.371 17:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83666 00:58:02.371 killing process with pid 83666 00:58:02.371 17:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:58:02.371 17:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:58:02.371 17:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83666' 00:58:02.371 17:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83666 00:58:02.371 17:55:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83666 00:58:04.908 17:55:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:58:04.908 17:55:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:58:05.167 [2024-11-26 17:55:05.641655] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:58:05.167 [2024-11-26 17:55:05.641810] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83726 ] 00:58:05.167 [2024-11-26 17:55:05.827668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:05.444 [2024-11-26 17:55:05.978310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:58:06.836  [2024-11-26T17:55:08.909Z] Copying: 247/1024 [MB] (247 MBps) [2024-11-26T17:55:09.845Z] Copying: 492/1024 [MB] (245 MBps) [2024-11-26T17:55:10.780Z] Copying: 741/1024 [MB] (249 MBps) [2024-11-26T17:55:10.780Z] Copying: 982/1024 [MB] (241 MBps) [2024-11-26T17:55:12.157Z] Copying: 1024/1024 [MB] (average 245 MBps) 00:58:11.463 00:58:11.463 Calculate MD5 checksum, iteration 1 00:58:11.463 17:55:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:58:11.463 17:55:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:58:11.463 17:55:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:58:11.463 17:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:58:11.463 17:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:58:11.463 17:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:58:11.463 17:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:58:11.463 17:55:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:58:11.463 [2024-11-26 17:55:12.119713] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:58:11.463 [2024-11-26 17:55:12.120055] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83791 ] 00:58:11.722 [2024-11-26 17:55:12.299734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:12.011 [2024-11-26 17:55:12.445097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:58:13.443  [2024-11-26T17:55:14.707Z] Copying: 685/1024 [MB] (685 MBps) [2024-11-26T17:55:15.684Z] Copying: 1024/1024 [MB] (average 678 MBps) 00:58:14.990 00:58:14.990 17:55:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:58:14.990 17:55:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:58:16.896 17:55:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:58:16.896 Fill FTL, iteration 2 00:58:16.896 17:55:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=c98b70bf945913f9611f024c27d0fe5c 00:58:16.896 17:55:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:58:16.896 17:55:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:58:16.896 17:55:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:58:16.896 17:55:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:58:16.896 17:55:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:58:16.896 17:55:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:58:16.896 17:55:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:58:16.896 17:55:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:58:16.896 17:55:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:58:16.896 [2024-11-26 17:55:17.423104] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:58:16.896 [2024-11-26 17:55:17.423408] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83854 ] 00:58:17.154 [2024-11-26 17:55:17.611915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:17.154 [2024-11-26 17:55:17.760873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:58:19.060  [2024-11-26T17:55:20.324Z] Copying: 247/1024 [MB] (247 MBps) [2024-11-26T17:55:21.703Z] Copying: 477/1024 [MB] (230 MBps) [2024-11-26T17:55:22.641Z] Copying: 718/1024 [MB] (241 MBps) [2024-11-26T17:55:22.641Z] Copying: 957/1024 [MB] (239 MBps) [2024-11-26T17:55:24.022Z] Copying: 1024/1024 [MB] (average 239 MBps) 00:58:23.328 00:58:23.328 Calculate MD5 checksum, iteration 2 00:58:23.328 17:55:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:58:23.328 17:55:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:58:23.328 17:55:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:58:23.328 17:55:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:58:23.328 17:55:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:58:23.328 17:55:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:58:23.328 17:55:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:58:23.328 17:55:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:58:23.328 [2024-11-26 17:55:23.982597] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
00:58:23.328 [2024-11-26 17:55:23.982897] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83924 ] 00:58:23.629 [2024-11-26 17:55:24.170429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:23.629 [2024-11-26 17:55:24.314767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:58:25.537  [2024-11-26T17:55:26.798Z] Copying: 685/1024 [MB] (685 MBps) [2024-11-26T17:55:28.176Z] Copying: 1024/1024 [MB] (average 665 MBps) 00:58:27.482 00:58:27.482 17:55:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:58:27.482 17:55:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:58:29.382 17:55:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:58:29.382 17:55:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=52dc2180db3dd129ca9c7309e8bd0a99 00:58:29.382 17:55:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:58:29.382 17:55:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:58:29.382 17:55:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:58:29.382 [2024-11-26 17:55:30.008807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:29.382 [2024-11-26 17:55:30.008869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:58:29.382 [2024-11-26 17:55:30.008888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:58:29.382 [2024-11-26 17:55:30.008902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:29.382 [2024-11-26 17:55:30.008934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:29.382 [2024-11-26 17:55:30.008953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:58:29.382 [2024-11-26 17:55:30.008965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:58:29.382 [2024-11-26 17:55:30.008977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:29.382 [2024-11-26 17:55:30.009005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:29.382 [2024-11-26 17:55:30.009017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:58:29.382 [2024-11-26 17:55:30.009028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:58:29.382 [2024-11-26 17:55:30.009039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:29.382 [2024-11-26 17:55:30.009130] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.310 ms, result 0 00:58:29.382 true 00:58:29.382 17:55:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:58:29.641 { 00:58:29.641 "name": "ftl", 00:58:29.641 "properties": [ 00:58:29.641 { 00:58:29.641 "name": "superblock_version", 00:58:29.641 "value": 5, 00:58:29.641 "read-only": true 00:58:29.641 }, 00:58:29.641 { 00:58:29.641 "name": "base_device", 00:58:29.641 "bands": [ 00:58:29.641 { 00:58:29.641 "id": 0, 00:58:29.641 "state": "FREE", 00:58:29.641 "validity": 0.0 
00:58:29.641 }, 00:58:29.641 { 00:58:29.641 "id": 1, 00:58:29.641 "state": "FREE", 00:58:29.641 "validity": 0.0 00:58:29.641 }, 00:58:29.641 { 00:58:29.641 "id": 2, 00:58:29.641 "state": "FREE", 00:58:29.641 "validity": 0.0 00:58:29.641 }, 00:58:29.641 { 00:58:29.641 "id": 3, 00:58:29.641 "state": "FREE", 00:58:29.641 "validity": 0.0 00:58:29.641 }, 00:58:29.641 { 00:58:29.641 "id": 4, 00:58:29.641 "state": "FREE", 00:58:29.641 "validity": 0.0 00:58:29.641 }, 00:58:29.641 { 00:58:29.641 "id": 5, 00:58:29.641 "state": "FREE", 00:58:29.641 "validity": 0.0 00:58:29.641 }, 00:58:29.641 { 00:58:29.641 "id": 6, 00:58:29.641 "state": "FREE", 00:58:29.641 "validity": 0.0 00:58:29.641 }, 00:58:29.641 { 00:58:29.641 "id": 7, 00:58:29.641 "state": "FREE", 00:58:29.641 "validity": 0.0 00:58:29.641 }, 00:58:29.641 { 00:58:29.641 "id": 8, 00:58:29.641 "state": "FREE", 00:58:29.641 "validity": 0.0 00:58:29.641 }, 00:58:29.641 { 00:58:29.641 "id": 9, 00:58:29.641 "state": "FREE", 00:58:29.641 "validity": 0.0 00:58:29.641 }, 00:58:29.641 { 00:58:29.641 "id": 10, 00:58:29.641 "state": "FREE", 00:58:29.641 "validity": 0.0 00:58:29.641 }, 00:58:29.641 { 00:58:29.641 "id": 11, 00:58:29.641 "state": "FREE", 00:58:29.641 "validity": 0.0 00:58:29.641 }, 00:58:29.641 { 00:58:29.641 "id": 12, 00:58:29.641 "state": "FREE", 00:58:29.641 "validity": 0.0 00:58:29.641 }, 00:58:29.641 { 00:58:29.641 "id": 13, 00:58:29.641 "state": "FREE", 00:58:29.641 "validity": 0.0 00:58:29.641 }, 00:58:29.641 { 00:58:29.641 "id": 14, 00:58:29.641 "state": "FREE", 00:58:29.641 "validity": 0.0 00:58:29.641 }, 00:58:29.641 { 00:58:29.641 "id": 15, 00:58:29.641 "state": "FREE", 00:58:29.641 "validity": 0.0 00:58:29.641 }, 00:58:29.641 { 00:58:29.641 "id": 16, 00:58:29.642 "state": "FREE", 00:58:29.642 "validity": 0.0 00:58:29.642 }, 00:58:29.642 { 00:58:29.642 "id": 17, 00:58:29.642 "state": "FREE", 00:58:29.642 "validity": 0.0 00:58:29.642 } 00:58:29.642 ], 00:58:29.642 "read-only": true 00:58:29.642 }, 00:58:29.642 { 00:58:29.642 "name": "cache_device", 00:58:29.642 "type": "bdev", 00:58:29.642 "chunks": [ 00:58:29.642 { 00:58:29.642 "id": 0, 00:58:29.642 "state": "INACTIVE", 00:58:29.642 "utilization": 0.0 00:58:29.642 }, 00:58:29.642 { 00:58:29.642 "id": 1, 00:58:29.642 "state": "CLOSED", 00:58:29.642 "utilization": 1.0 00:58:29.642 }, 00:58:29.642 { 00:58:29.642 "id": 2, 00:58:29.642 "state": "CLOSED", 00:58:29.642 "utilization": 1.0 00:58:29.642 }, 00:58:29.642 { 00:58:29.642 "id": 3, 00:58:29.642 "state": "OPEN", 00:58:29.642 "utilization": 0.001953125 00:58:29.642 }, 00:58:29.642 { 00:58:29.642 "id": 4, 00:58:29.642 "state": "OPEN", 00:58:29.642 "utilization": 0.0 00:58:29.642 } 00:58:29.642 ], 00:58:29.642 "read-only": true 00:58:29.642 }, 00:58:29.642 { 00:58:29.642 "name": "verbose_mode", 00:58:29.642 "value": true, 00:58:29.642 "unit": "", 00:58:29.642 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:58:29.642 }, 00:58:29.642 { 00:58:29.642 "name": "prep_upgrade_on_shutdown", 00:58:29.642 "value": false, 00:58:29.642 "unit": "", 00:58:29.642 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:58:29.642 } 00:58:29.642 ] 00:58:29.642 } 00:58:29.642 17:55:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:58:29.900 [2024-11-26 17:55:30.444518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:58:29.901 [2024-11-26 17:55:30.444592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:58:29.901 [2024-11-26 17:55:30.444612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:58:29.901 [2024-11-26 17:55:30.444624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:29.901 [2024-11-26 17:55:30.444659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:29.901 [2024-11-26 17:55:30.444672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:58:29.901 [2024-11-26 17:55:30.444683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:58:29.901 [2024-11-26 17:55:30.444694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:29.901 [2024-11-26 17:55:30.444716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:29.901 [2024-11-26 17:55:30.444727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:58:29.901 [2024-11-26 17:55:30.444738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:58:29.901 [2024-11-26 17:55:30.444749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:29.901 [2024-11-26 17:55:30.444819] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.324 ms, result 0 00:58:29.901 true 00:58:29.901 17:55:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:58:29.901 17:55:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:58:29.901 17:55:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:58:30.159 17:55:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:58:30.159 17:55:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:58:30.159 17:55:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:58:30.418 [2024-11-26 17:55:30.893462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:30.418 [2024-11-26 17:55:30.893571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:58:30.418 [2024-11-26 17:55:30.893591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:58:30.418 [2024-11-26 17:55:30.893603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:30.418 [2024-11-26 17:55:30.893631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:30.418 [2024-11-26 17:55:30.893644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:58:30.418 [2024-11-26 17:55:30.893656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:58:30.418 [2024-11-26 17:55:30.893667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:30.418 [2024-11-26 17:55:30.893688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:30.418 [2024-11-26 17:55:30.893700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:58:30.418 [2024-11-26 17:55:30.893711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:58:30.418 [2024-11-26 17:55:30.893723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:58:30.418 [2024-11-26 17:55:30.893794] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.324 ms, result 0 00:58:30.418 true 00:58:30.418 17:55:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:58:30.418 { 00:58:30.418 "name": "ftl", 00:58:30.418 "properties": [ 00:58:30.418 { 00:58:30.418 "name": "superblock_version", 00:58:30.418 "value": 5, 00:58:30.418 "read-only": true 00:58:30.418 }, 00:58:30.418 { 00:58:30.418 "name": "base_device", 00:58:30.418 "bands": [ 00:58:30.418 { 00:58:30.418 "id": 0, 00:58:30.418 "state": "FREE", 00:58:30.418 "validity": 0.0 00:58:30.418 }, 00:58:30.418 { 00:58:30.418 "id": 1, 00:58:30.418 "state": "FREE", 00:58:30.418 "validity": 0.0 00:58:30.418 }, 00:58:30.418 { 00:58:30.418 "id": 2, 00:58:30.418 "state": "FREE", 00:58:30.418 "validity": 0.0 00:58:30.418 }, 00:58:30.418 { 00:58:30.418 "id": 3, 00:58:30.418 "state": "FREE", 00:58:30.418 "validity": 0.0 00:58:30.418 }, 00:58:30.418 { 00:58:30.418 "id": 4, 00:58:30.418 "state": "FREE", 00:58:30.418 "validity": 0.0 00:58:30.418 }, 00:58:30.418 { 00:58:30.418 "id": 5, 00:58:30.418 "state": "FREE", 00:58:30.418 "validity": 0.0 00:58:30.418 }, 00:58:30.418 { 00:58:30.418 "id": 6, 00:58:30.418 "state": "FREE", 00:58:30.418 "validity": 0.0 00:58:30.418 }, 00:58:30.418 { 00:58:30.418 "id": 7, 00:58:30.418 "state": "FREE", 00:58:30.418 "validity": 0.0 00:58:30.418 }, 00:58:30.418 { 00:58:30.418 "id": 8, 00:58:30.418 "state": "FREE", 00:58:30.418 "validity": 0.0 00:58:30.418 }, 00:58:30.418 { 00:58:30.418 "id": 9, 00:58:30.418 "state": "FREE", 00:58:30.418 "validity": 0.0 00:58:30.418 }, 00:58:30.418 { 00:58:30.418 "id": 10, 00:58:30.418 "state": "FREE", 00:58:30.418 "validity": 0.0 00:58:30.418 }, 00:58:30.418 { 00:58:30.418 "id": 11, 00:58:30.418 "state": "FREE", 00:58:30.418 "validity": 0.0 00:58:30.418 }, 00:58:30.418 { 00:58:30.418 "id": 12, 00:58:30.418 "state": "FREE", 00:58:30.418 "validity": 0.0 00:58:30.418 }, 00:58:30.418 { 00:58:30.418 "id": 13, 00:58:30.418 "state": "FREE", 00:58:30.418 "validity": 0.0 00:58:30.418 }, 00:58:30.418 { 00:58:30.418 "id": 14, 00:58:30.418 "state": "FREE", 00:58:30.418 "validity": 0.0 00:58:30.418 }, 00:58:30.418 { 00:58:30.418 "id": 15, 00:58:30.418 "state": "FREE", 00:58:30.418 "validity": 0.0 00:58:30.418 }, 00:58:30.418 { 00:58:30.418 "id": 16, 00:58:30.418 "state": "FREE", 00:58:30.418 "validity": 0.0 00:58:30.418 }, 00:58:30.418 { 00:58:30.418 "id": 17, 00:58:30.418 "state": "FREE", 00:58:30.418 "validity": 0.0 00:58:30.418 } 00:58:30.418 ], 00:58:30.418 "read-only": true 00:58:30.418 }, 00:58:30.418 { 00:58:30.418 "name": "cache_device", 00:58:30.418 "type": "bdev", 00:58:30.418 "chunks": [ 00:58:30.418 { 00:58:30.418 "id": 0, 00:58:30.418 "state": "INACTIVE", 00:58:30.418 "utilization": 0.0 00:58:30.418 }, 00:58:30.418 { 00:58:30.418 "id": 1, 00:58:30.418 "state": "CLOSED", 00:58:30.418 "utilization": 1.0 00:58:30.418 }, 00:58:30.418 { 00:58:30.418 "id": 2, 00:58:30.418 "state": "CLOSED", 00:58:30.418 "utilization": 1.0 00:58:30.418 }, 00:58:30.418 { 00:58:30.418 "id": 3, 00:58:30.418 "state": "OPEN", 00:58:30.418 "utilization": 0.001953125 00:58:30.418 }, 00:58:30.418 { 00:58:30.418 "id": 4, 00:58:30.418 "state": "OPEN", 00:58:30.418 "utilization": 0.0 00:58:30.418 } 00:58:30.418 ], 00:58:30.418 "read-only": true 00:58:30.418 }, 00:58:30.418 { 00:58:30.418 "name": "verbose_mode", 
00:58:30.418 "value": true, 00:58:30.418 "unit": "", 00:58:30.418 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:58:30.418 }, 00:58:30.418 { 00:58:30.418 "name": "prep_upgrade_on_shutdown", 00:58:30.418 "value": true, 00:58:30.418 "unit": "", 00:58:30.418 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:58:30.418 } 00:58:30.418 ] 00:58:30.418 } 00:58:30.418 17:55:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:58:30.418 17:55:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83534 ]] 00:58:30.418 17:55:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83534 00:58:30.418 17:55:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83534 ']' 00:58:30.419 17:55:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83534 00:58:30.419 17:55:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:58:30.677 17:55:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:58:30.677 17:55:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83534 00:58:30.677 killing process with pid 83534 00:58:30.677 17:55:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:58:30.677 17:55:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:58:30.677 17:55:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83534' 00:58:30.677 17:55:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83534 00:58:30.677 17:55:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83534 00:58:32.094 [2024-11-26 17:55:32.404589] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:58:32.094 [2024-11-26 17:55:32.426081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:32.094 [2024-11-26 17:55:32.426129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:58:32.094 [2024-11-26 17:55:32.426147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:58:32.094 [2024-11-26 17:55:32.426158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:32.094 [2024-11-26 17:55:32.426183] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:58:32.094 [2024-11-26 17:55:32.431223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:32.094 [2024-11-26 17:55:32.431254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:58:32.094 [2024-11-26 17:55:32.431267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.030 ms 00:58:32.094 [2024-11-26 17:55:32.431289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:40.222 [2024-11-26 17:55:39.550261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:40.222 [2024-11-26 17:55:39.550539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:58:40.222 [2024-11-26 17:55:39.550578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7130.496 ms 00:58:40.222 [2024-11-26 17:55:39.550591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:40.222 [2024-11-26 17:55:39.551670] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:58:40.222 [2024-11-26 17:55:39.551697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:58:40.222 [2024-11-26 17:55:39.551711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.055 ms 00:58:40.222 [2024-11-26 17:55:39.551723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:40.222 [2024-11-26 17:55:39.552663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:40.222 [2024-11-26 17:55:39.552686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:58:40.222 [2024-11-26 17:55:39.552699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.912 ms 00:58:40.222 [2024-11-26 17:55:39.552718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:40.222 [2024-11-26 17:55:39.568742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:40.222 [2024-11-26 17:55:39.568780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:58:40.222 [2024-11-26 17:55:39.568795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.993 ms 00:58:40.222 [2024-11-26 17:55:39.568806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:40.222 [2024-11-26 17:55:39.578093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:40.222 [2024-11-26 17:55:39.578235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:58:40.222 [2024-11-26 17:55:39.578258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.266 ms 00:58:40.222 [2024-11-26 17:55:39.578270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:40.222 [2024-11-26 17:55:39.578370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:40.222 [2024-11-26 17:55:39.578391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:58:40.222 [2024-11-26 17:55:39.578403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:58:40.222 [2024-11-26 17:55:39.578414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:40.222 [2024-11-26 17:55:39.593091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:40.222 [2024-11-26 17:55:39.593222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:58:40.222 [2024-11-26 17:55:39.593242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.683 ms 00:58:40.222 [2024-11-26 17:55:39.593253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:40.222 [2024-11-26 17:55:39.608070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:40.222 [2024-11-26 17:55:39.608201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:58:40.222 [2024-11-26 17:55:39.608221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.805 ms 00:58:40.222 [2024-11-26 17:55:39.608231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:40.222 [2024-11-26 17:55:39.622514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:40.222 [2024-11-26 17:55:39.622547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:58:40.222 [2024-11-26 17:55:39.622560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.269 ms 00:58:40.222 [2024-11-26 17:55:39.622570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:40.222 [2024-11-26 17:55:39.637066] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:40.222 [2024-11-26 17:55:39.637207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:58:40.222 [2024-11-26 17:55:39.637226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.443 ms 00:58:40.222 [2024-11-26 17:55:39.637237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:40.222 [2024-11-26 17:55:39.637306] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:58:40.222 [2024-11-26 17:55:39.637337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:58:40.222 [2024-11-26 17:55:39.637351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:58:40.222 [2024-11-26 17:55:39.637362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:58:40.222 [2024-11-26 17:55:39.637374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:58:40.222 [2024-11-26 17:55:39.637386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:58:40.222 [2024-11-26 17:55:39.637397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:58:40.222 [2024-11-26 17:55:39.637408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:58:40.222 [2024-11-26 17:55:39.637418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:58:40.222 [2024-11-26 17:55:39.637429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:58:40.222 [2024-11-26 17:55:39.637439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:58:40.222 [2024-11-26 17:55:39.637449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:58:40.222 [2024-11-26 17:55:39.637460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:58:40.222 [2024-11-26 17:55:39.637471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:58:40.222 [2024-11-26 17:55:39.637481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:58:40.222 [2024-11-26 17:55:39.637491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:58:40.222 [2024-11-26 17:55:39.637518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:58:40.222 [2024-11-26 17:55:39.637530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:58:40.222 [2024-11-26 17:55:39.637540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:58:40.222 [2024-11-26 17:55:39.637554] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:58:40.222 [2024-11-26 17:55:39.637565] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f7bad5d5-01fa-4dfe-bb1f-4590f7e6f6ba 00:58:40.222 [2024-11-26 17:55:39.637577] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:58:40.222 [2024-11-26 17:55:39.637587] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:58:40.222 [2024-11-26 17:55:39.637598] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:58:40.222 [2024-11-26 17:55:39.637618] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:58:40.222 [2024-11-26 17:55:39.637634] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:58:40.222 [2024-11-26 17:55:39.637645] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:58:40.222 [2024-11-26 17:55:39.637661] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:58:40.222 [2024-11-26 17:55:39.637670] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:58:40.222 [2024-11-26 17:55:39.637679] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:58:40.222 [2024-11-26 17:55:39.637689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:40.222 [2024-11-26 17:55:39.637704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:58:40.222 [2024-11-26 17:55:39.637715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.385 ms 00:58:40.222 [2024-11-26 17:55:39.637726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:40.222 [2024-11-26 17:55:39.658972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:40.222 [2024-11-26 17:55:39.659006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:58:40.222 [2024-11-26 17:55:39.659026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.248 ms 00:58:40.222 [2024-11-26 17:55:39.659038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:40.222 [2024-11-26 17:55:39.659697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:40.222 [2024-11-26 17:55:39.659711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:58:40.222 [2024-11-26 17:55:39.659722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.637 ms 00:58:40.222 [2024-11-26 17:55:39.659733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:40.222 [2024-11-26 17:55:39.732219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:58:40.222 [2024-11-26 17:55:39.732312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:58:40.222 [2024-11-26 17:55:39.732332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:58:40.222 [2024-11-26 17:55:39.732344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:40.222 [2024-11-26 17:55:39.732417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:58:40.222 [2024-11-26 17:55:39.732430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:58:40.222 [2024-11-26 17:55:39.732441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:58:40.222 [2024-11-26 17:55:39.732452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:40.222 [2024-11-26 17:55:39.732608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:58:40.222 [2024-11-26 17:55:39.732626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:58:40.222 [2024-11-26 17:55:39.732643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:58:40.222 [2024-11-26 17:55:39.732654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:40.222 [2024-11-26 17:55:39.732676] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:58:40.222 [2024-11-26 17:55:39.732688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:58:40.222 [2024-11-26 17:55:39.732698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:58:40.222 [2024-11-26 17:55:39.732710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:40.222 [2024-11-26 17:55:39.872744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:58:40.222 [2024-11-26 17:55:39.872830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:58:40.222 [2024-11-26 17:55:39.872856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:58:40.222 [2024-11-26 17:55:39.872868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:40.222 [2024-11-26 17:55:39.979450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:58:40.222 [2024-11-26 17:55:39.979784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:58:40.222 [2024-11-26 17:55:39.979816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:58:40.222 [2024-11-26 17:55:39.979830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:40.222 [2024-11-26 17:55:39.980018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:58:40.222 [2024-11-26 17:55:39.980034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:58:40.222 [2024-11-26 17:55:39.980047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:58:40.222 [2024-11-26 17:55:39.980068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:40.222 [2024-11-26 17:55:39.980131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:58:40.222 [2024-11-26 17:55:39.980145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:58:40.222 [2024-11-26 17:55:39.980157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:58:40.222 [2024-11-26 17:55:39.980168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:40.222 [2024-11-26 17:55:39.980308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:58:40.222 [2024-11-26 17:55:39.980324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:58:40.222 [2024-11-26 17:55:39.980336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:58:40.222 [2024-11-26 17:55:39.980347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:40.222 [2024-11-26 17:55:39.980397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:58:40.222 [2024-11-26 17:55:39.980412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:58:40.222 [2024-11-26 17:55:39.980424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:58:40.222 [2024-11-26 17:55:39.980435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:40.222 [2024-11-26 17:55:39.980487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:58:40.222 [2024-11-26 17:55:39.980500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:58:40.222 [2024-11-26 17:55:39.980525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:58:40.222 [2024-11-26 17:55:39.980537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:40.222 
[2024-11-26 17:55:39.980600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:58:40.222 [2024-11-26 17:55:39.980614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:58:40.222 [2024-11-26 17:55:39.980626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:58:40.222 [2024-11-26 17:55:39.980638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:40.222 [2024-11-26 17:55:39.980799] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7566.939 ms, result 0 00:58:42.760 17:55:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:58:42.760 17:55:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:58:42.760 17:55:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:58:42.760 17:55:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:58:42.760 17:55:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:58:42.760 17:55:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:58:42.760 17:55:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84118 00:58:42.760 17:55:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:58:42.760 17:55:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84118 00:58:42.760 17:55:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84118 ']' 00:58:42.760 17:55:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:58:42.760 17:55:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:58:42.760 17:55:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:58:42.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:58:42.761 17:55:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:58:42.761 17:55:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:58:43.019 [2024-11-26 17:55:43.549206] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
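
The trace above tears the old target down and brings a fresh spdk_tgt up, then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that launch-and-poll pattern, assuming rpc_get_methods as the liveness probe and an arbitrary 100 x 0.1 s budget (the real waitforlisten helper in test/common/autotest_common.sh is more thorough):

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Launch the target in the background with the same cpumask and config
# seen in this run.
"$SPDK_BIN" --cpumask='[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
spdk_tgt_pid=$!

# Poll the UNIX domain socket until the target serves RPCs, bailing out
# early if the process dies first.
for ((i = 0; i < 100; i++)); do
    if ! kill -0 "$spdk_tgt_pid" 2>/dev/null; then
        echo "spdk_tgt exited before listening" >&2
        exit 1
    fi
    if "$RPC" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
        break  # socket is up; safe to issue bdev_ftl_* RPCs
    fi
    sleep 0.1
done
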
00:58:43.020 [2024-11-26 17:55:43.549569] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84118 ] 00:58:43.278 [2024-11-26 17:55:43.738637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:43.279 [2024-11-26 17:55:43.895816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:58:44.660 [2024-11-26 17:55:45.046190] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:58:44.660 [2024-11-26 17:55:45.046463] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:58:44.660 [2024-11-26 17:55:45.194440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:44.660 [2024-11-26 17:55:45.194489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:58:44.660 [2024-11-26 17:55:45.194535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:58:44.660 [2024-11-26 17:55:45.194547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:44.660 [2024-11-26 17:55:45.194617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:44.660 [2024-11-26 17:55:45.194631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:58:44.660 [2024-11-26 17:55:45.194643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:58:44.660 [2024-11-26 17:55:45.194654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:44.660 [2024-11-26 17:55:45.194678] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:58:44.660 [2024-11-26 17:55:45.195618] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:58:44.660 [2024-11-26 17:55:45.195769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:44.660 [2024-11-26 17:55:45.195786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:58:44.660 [2024-11-26 17:55:45.195798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.095 ms 00:58:44.660 [2024-11-26 17:55:45.195808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:44.660 [2024-11-26 17:55:45.198307] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:58:44.660 [2024-11-26 17:55:45.218884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:44.660 [2024-11-26 17:55:45.218930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:58:44.660 [2024-11-26 17:55:45.218945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.611 ms 00:58:44.660 [2024-11-26 17:55:45.218956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:44.660 [2024-11-26 17:55:45.219024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:44.660 [2024-11-26 17:55:45.219037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:58:44.660 [2024-11-26 17:55:45.219049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:58:44.660 [2024-11-26 17:55:45.219060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:44.660 [2024-11-26 17:55:45.231817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:44.660 [2024-11-26 
17:55:45.231850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:58:44.660 [2024-11-26 17:55:45.231863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.695 ms 00:58:44.660 [2024-11-26 17:55:45.231874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:44.660 [2024-11-26 17:55:45.232044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:44.660 [2024-11-26 17:55:45.232059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:58:44.660 [2024-11-26 17:55:45.232071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.147 ms 00:58:44.660 [2024-11-26 17:55:45.232081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:44.660 [2024-11-26 17:55:45.232145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:44.660 [2024-11-26 17:55:45.232162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:58:44.660 [2024-11-26 17:55:45.232173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:58:44.660 [2024-11-26 17:55:45.232184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:44.660 [2024-11-26 17:55:45.232213] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:58:44.660 [2024-11-26 17:55:45.238687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:44.660 [2024-11-26 17:55:45.238720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:58:44.660 [2024-11-26 17:55:45.238738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.492 ms 00:58:44.660 [2024-11-26 17:55:45.238748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:44.660 [2024-11-26 17:55:45.238795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:44.660 [2024-11-26 17:55:45.238806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:58:44.660 [2024-11-26 17:55:45.238817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:58:44.660 [2024-11-26 17:55:45.238827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:44.660 [2024-11-26 17:55:45.238869] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:58:44.660 [2024-11-26 17:55:45.238900] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:58:44.660 [2024-11-26 17:55:45.238940] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:58:44.660 [2024-11-26 17:55:45.238960] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:58:44.660 [2024-11-26 17:55:45.239055] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:58:44.660 [2024-11-26 17:55:45.239070] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:58:44.660 [2024-11-26 17:55:45.239083] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:58:44.660 [2024-11-26 17:55:45.239097] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:58:44.660 [2024-11-26 17:55:45.239114] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:58:44.660 [2024-11-26 17:55:45.239126] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:58:44.660 [2024-11-26 17:55:45.239137] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:58:44.660 [2024-11-26 17:55:45.239148] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:58:44.660 [2024-11-26 17:55:45.239159] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:58:44.660 [2024-11-26 17:55:45.239170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:44.660 [2024-11-26 17:55:45.239180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:58:44.660 [2024-11-26 17:55:45.239191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.305 ms 00:58:44.660 [2024-11-26 17:55:45.239201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:44.660 [2024-11-26 17:55:45.239275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:44.660 [2024-11-26 17:55:45.239286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:58:44.660 [2024-11-26 17:55:45.239301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:58:44.660 [2024-11-26 17:55:45.239312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:44.660 [2024-11-26 17:55:45.239416] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:58:44.660 [2024-11-26 17:55:45.239430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:58:44.660 [2024-11-26 17:55:45.239442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:58:44.660 [2024-11-26 17:55:45.239452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:58:44.660 [2024-11-26 17:55:45.239463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:58:44.660 [2024-11-26 17:55:45.239473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:58:44.660 [2024-11-26 17:55:45.239483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:58:44.660 [2024-11-26 17:55:45.239508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:58:44.660 [2024-11-26 17:55:45.239520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:58:44.660 [2024-11-26 17:55:45.239530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:58:44.660 [2024-11-26 17:55:45.239544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:58:44.660 [2024-11-26 17:55:45.239554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:58:44.660 [2024-11-26 17:55:45.239564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:58:44.660 [2024-11-26 17:55:45.239573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:58:44.660 [2024-11-26 17:55:45.239584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:58:44.660 [2024-11-26 17:55:45.239593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:58:44.660 [2024-11-26 17:55:45.239603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:58:44.660 [2024-11-26 17:55:45.239628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:58:44.660 [2024-11-26 17:55:45.239638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:58:44.660 [2024-11-26 17:55:45.239647] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:58:44.660 [2024-11-26 17:55:45.239657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:58:44.660 [2024-11-26 17:55:45.239666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:58:44.660 [2024-11-26 17:55:45.239676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:58:44.660 [2024-11-26 17:55:45.239698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:58:44.660 [2024-11-26 17:55:45.239709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:58:44.660 [2024-11-26 17:55:45.239718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:58:44.660 [2024-11-26 17:55:45.239728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:58:44.660 [2024-11-26 17:55:45.239737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:58:44.660 [2024-11-26 17:55:45.239746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:58:44.660 [2024-11-26 17:55:45.239756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:58:44.661 [2024-11-26 17:55:45.239766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:58:44.661 [2024-11-26 17:55:45.239784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:58:44.661 [2024-11-26 17:55:45.239794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:58:44.661 [2024-11-26 17:55:45.239803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:58:44.661 [2024-11-26 17:55:45.239812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:58:44.661 [2024-11-26 17:55:45.239821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:58:44.661 [2024-11-26 17:55:45.239831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:58:44.661 [2024-11-26 17:55:45.239840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:58:44.661 [2024-11-26 17:55:45.239850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:58:44.661 [2024-11-26 17:55:45.239859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:58:44.661 [2024-11-26 17:55:45.239869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:58:44.661 [2024-11-26 17:55:45.239879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:58:44.661 [2024-11-26 17:55:45.239891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:58:44.661 [2024-11-26 17:55:45.239901] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:58:44.661 [2024-11-26 17:55:45.239911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:58:44.661 [2024-11-26 17:55:45.239922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:58:44.661 [2024-11-26 17:55:45.239938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:58:44.661 [2024-11-26 17:55:45.239949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:58:44.661 [2024-11-26 17:55:45.239959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:58:44.661 [2024-11-26 17:55:45.239968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:58:44.661 [2024-11-26 17:55:45.239978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:58:44.661 [2024-11-26 17:55:45.239987] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:58:44.661 [2024-11-26 17:55:45.239997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:58:44.661 [2024-11-26 17:55:45.240008] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:58:44.661 [2024-11-26 17:55:45.240021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:58:44.661 [2024-11-26 17:55:45.240033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:58:44.661 [2024-11-26 17:55:45.240044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:58:44.661 [2024-11-26 17:55:45.240054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:58:44.661 [2024-11-26 17:55:45.240066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:58:44.661 [2024-11-26 17:55:45.240076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:58:44.661 [2024-11-26 17:55:45.240087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:58:44.661 [2024-11-26 17:55:45.240097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:58:44.661 [2024-11-26 17:55:45.240107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:58:44.661 [2024-11-26 17:55:45.240118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:58:44.661 [2024-11-26 17:55:45.240129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:58:44.661 [2024-11-26 17:55:45.240138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:58:44.661 [2024-11-26 17:55:45.240149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:58:44.661 [2024-11-26 17:55:45.240159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:58:44.661 [2024-11-26 17:55:45.240170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:58:44.661 [2024-11-26 17:55:45.240182] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:58:44.661 [2024-11-26 17:55:45.240196] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:58:44.661 [2024-11-26 17:55:45.240207] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:58:44.661 [2024-11-26 17:55:45.240218] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:58:44.661 [2024-11-26 17:55:45.240228] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:58:44.661 [2024-11-26 17:55:45.240239] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:58:44.661 [2024-11-26 17:55:45.240250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:44.661 [2024-11-26 17:55:45.240262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:58:44.661 [2024-11-26 17:55:45.240272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.891 ms 00:58:44.661 [2024-11-26 17:55:45.240282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:44.661 [2024-11-26 17:55:45.240333] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:58:44.661 [2024-11-26 17:55:45.240350] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:58:47.951 [2024-11-26 17:55:48.362872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:47.951 [2024-11-26 17:55:48.362957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:58:47.951 [2024-11-26 17:55:48.362979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3127.605 ms 00:58:47.951 [2024-11-26 17:55:48.362991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:47.951 [2024-11-26 17:55:48.411622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:47.951 [2024-11-26 17:55:48.411690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:58:47.951 [2024-11-26 17:55:48.411709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 48.360 ms 00:58:47.951 [2024-11-26 17:55:48.411722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:47.951 [2024-11-26 17:55:48.411868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:47.951 [2024-11-26 17:55:48.411882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:58:47.951 [2024-11-26 17:55:48.411895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:58:47.951 [2024-11-26 17:55:48.411906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:47.951 [2024-11-26 17:55:48.467381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:47.951 [2024-11-26 17:55:48.467433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:58:47.951 [2024-11-26 17:55:48.467455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 55.485 ms 00:58:47.951 [2024-11-26 17:55:48.467467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:47.951 [2024-11-26 17:55:48.467551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:47.951 [2024-11-26 17:55:48.467578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:58:47.951 [2024-11-26 17:55:48.467590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:58:47.951 [2024-11-26 17:55:48.467601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:47.951 [2024-11-26 17:55:48.468434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:47.951 [2024-11-26 17:55:48.468455] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:58:47.951 [2024-11-26 17:55:48.468468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.755 ms 00:58:47.951 [2024-11-26 17:55:48.468484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:47.951 [2024-11-26 17:55:48.468547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:47.951 [2024-11-26 17:55:48.468559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:58:47.951 [2024-11-26 17:55:48.468571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:58:47.951 [2024-11-26 17:55:48.468582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:47.951 [2024-11-26 17:55:48.495879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:47.951 [2024-11-26 17:55:48.495929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:58:47.951 [2024-11-26 17:55:48.495946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.315 ms 00:58:47.951 [2024-11-26 17:55:48.495974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:47.951 [2024-11-26 17:55:48.530382] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:58:47.951 [2024-11-26 17:55:48.530428] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:58:47.951 [2024-11-26 17:55:48.530446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:47.951 [2024-11-26 17:55:48.530458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:58:47.951 [2024-11-26 17:55:48.530472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.348 ms 00:58:47.951 [2024-11-26 17:55:48.530484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:47.951 [2024-11-26 17:55:48.550863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:47.951 [2024-11-26 17:55:48.550902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:58:47.951 [2024-11-26 17:55:48.550917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.340 ms 00:58:47.951 [2024-11-26 17:55:48.550945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:47.951 [2024-11-26 17:55:48.568996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:47.951 [2024-11-26 17:55:48.569147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:58:47.951 [2024-11-26 17:55:48.569170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.028 ms 00:58:47.951 [2024-11-26 17:55:48.569180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:47.951 [2024-11-26 17:55:48.587405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:47.951 [2024-11-26 17:55:48.587456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:58:47.951 [2024-11-26 17:55:48.587471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.153 ms 00:58:47.951 [2024-11-26 17:55:48.587482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:47.951 [2024-11-26 17:55:48.588261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:47.951 [2024-11-26 17:55:48.588295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:58:47.951 [2024-11-26 
17:55:48.588308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.643 ms 00:58:47.951 [2024-11-26 17:55:48.588319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:48.212 [2024-11-26 17:55:48.689935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:48.212 [2024-11-26 17:55:48.690001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:58:48.212 [2024-11-26 17:55:48.690021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 101.753 ms 00:58:48.212 [2024-11-26 17:55:48.690033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:48.212 [2024-11-26 17:55:48.701641] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:58:48.212 [2024-11-26 17:55:48.703011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:48.212 [2024-11-26 17:55:48.703041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:58:48.212 [2024-11-26 17:55:48.703056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.928 ms 00:58:48.212 [2024-11-26 17:55:48.703068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:48.212 [2024-11-26 17:55:48.703188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:48.212 [2024-11-26 17:55:48.703206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:58:48.212 [2024-11-26 17:55:48.703219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:58:48.212 [2024-11-26 17:55:48.703230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:48.212 [2024-11-26 17:55:48.703303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:48.212 [2024-11-26 17:55:48.703316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:58:48.212 [2024-11-26 17:55:48.703328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:58:48.212 [2024-11-26 17:55:48.703339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:48.212 [2024-11-26 17:55:48.703378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:48.212 [2024-11-26 17:55:48.703390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:58:48.212 [2024-11-26 17:55:48.703405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:58:48.212 [2024-11-26 17:55:48.703416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:48.212 [2024-11-26 17:55:48.703457] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:58:48.212 [2024-11-26 17:55:48.703471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:48.212 [2024-11-26 17:55:48.703482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:58:48.212 [2024-11-26 17:55:48.703492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:58:48.212 [2024-11-26 17:55:48.703519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:58:48.212 [2024-11-26 17:55:48.740648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:58:48.212 [2024-11-26 17:55:48.740812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:58:48.212 [2024-11-26 17:55:48.740835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.159 ms 00:58:48.212 [2024-11-26 17:55:48.740847] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:58:48.212 [2024-11-26 17:55:48.741014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:58:48.212 [2024-11-26 17:55:48.741030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:58:48.212 [2024-11-26 17:55:48.741042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms
00:58:48.212 [2024-11-26 17:55:48.741053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:58:48.212 [2024-11-26 17:55:48.742640] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3553.401 ms, result 0
00:58:48.212 [2024-11-26 17:55:48.757218] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:58:48.212 [2024-11-26 17:55:48.773184] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:58:48.212 [2024-11-26 17:55:48.782608] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:58:48.212 17:55:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:58:48.212 17:55:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:58:48.212 17:55:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:58:48.212 17:55:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:58:48.212 17:55:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:58:48.471 [2024-11-26 17:55:49.038177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:58:48.471 [2024-11-26 17:55:49.038388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:58:48.471 [2024-11-26 17:55:49.038488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms
00:58:48.471 [2024-11-26 17:55:49.038538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:58:48.471 [2024-11-26 17:55:49.038610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:58:48.471 [2024-11-26 17:55:49.038644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:58:48.471 [2024-11-26 17:55:49.038676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms
00:58:48.471 [2024-11-26 17:55:49.038706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:58:48.472 [2024-11-26 17:55:49.038748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:58:48.472 [2024-11-26 17:55:49.038834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:58:48.472 [2024-11-26 17:55:49.038872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:58:48.472 [2024-11-26 17:55:49.038903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:58:48.472 [2024-11-26 17:55:49.039014] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.821 ms, result 0
00:58:48.472 true
00:58:48.472 17:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:58:48.731 {
00:58:48.731   "name": "ftl",
00:58:48.731   "properties": [
00:58:48.731     { "name": "superblock_version", "value": 5, "read-only": true },
00:58:48.731     { "name": "base_device",
00:58:48.731       "bands": [
00:58:48.731         { "id": 0, "state": "CLOSED", "validity": 1.0 },
00:58:48.731         { "id": 1, "state": "CLOSED", "validity": 1.0 },
00:58:48.731         { "id": 2, "state": "CLOSED", "validity": 0.007843137254901933 },
00:58:48.731         { "id": 3, "state": "FREE", "validity": 0.0 },
00:58:48.731         { "id": 4, "state": "FREE", "validity": 0.0 },
00:58:48.731         { "id": 5, "state": "FREE", "validity": 0.0 },
00:58:48.731         { "id": 6, "state": "FREE", "validity": 0.0 },
00:58:48.731         { "id": 7, "state": "FREE", "validity": 0.0 },
00:58:48.731         { "id": 8, "state": "FREE", "validity": 0.0 },
00:58:48.731         { "id": 9, "state": "FREE", "validity": 0.0 },
00:58:48.731         { "id": 10, "state": "FREE", "validity": 0.0 },
00:58:48.731         { "id": 11, "state": "FREE", "validity": 0.0 },
00:58:48.731         { "id": 12, "state": "FREE", "validity": 0.0 },
00:58:48.731         { "id": 13, "state": "FREE", "validity": 0.0 },
00:58:48.731         { "id": 14, "state": "FREE", "validity": 0.0 },
00:58:48.731         { "id": 15, "state": "FREE", "validity": 0.0 },
00:58:48.731         { "id": 16, "state": "FREE", "validity": 0.0 },
00:58:48.731         { "id": 17, "state": "FREE", "validity": 0.0 }
00:58:48.731       ],
00:58:48.731       "read-only": true },
00:58:48.731     { "name": "cache_device", "type": "bdev",
00:58:48.731       "chunks": [
00:58:48.731         { "id": 0, "state": "INACTIVE", "utilization": 0.0 },
00:58:48.731         { "id": 1, "state": "OPEN", "utilization": 0.0 },
00:58:48.731         { "id": 2, "state": "OPEN", "utilization": 0.0 },
00:58:48.731         { "id": 3, "state": "FREE", "utilization": 0.0 },
00:58:48.731         { "id": 4, "state": "FREE", "utilization": 0.0 }
00:58:48.731       ],
00:58:48.731       "read-only": true },
00:58:48.731     { "name": "verbose_mode", "value": true, "unit": "", "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" },
00:58:48.731     { "name": "prep_upgrade_on_shutdown", "value": false, "unit": "", "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" }
00:58:48.731   ]
00:58:48.731 }
00:58:48.731 17:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:58:48.731 17:55:49
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:58:48.732 17:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:58:48.991 17:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:58:48.991 17:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:58:48.991 17:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:58:48.991 17:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:58:48.991 17:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:58:49.251 Validate MD5 checksum, iteration 1 00:58:49.251 17:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:58:49.251 17:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:58:49.251 17:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:58:49.251 17:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:58:49.251 17:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:58:49.251 17:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:58:49.251 17:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:58:49.251 17:55:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:58:49.251 17:55:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:58:49.251 17:55:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:58:49.251 17:55:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:58:49.251 17:55:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:58:49.251 17:55:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:58:49.251 [2024-11-26 17:55:49.801683] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
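
The jq filter traced above is the test's post-restart assertion: after a shutdown prepared with prep_upgrade_on_shutdown, no cache chunk may still hold data. A minimal sketch of that check, assuming a small wrapper around rpc.py (the helper name used_chunks is illustrative, not the script's own):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

used_chunks() {
    # Count cache_device chunks whose utilization is non-zero.
    "$RPC" bdev_ftl_get_properties -b ftl |
        jq '[.properties[]
             | select(.name == "cache_device")
             | .chunks[]
             | select(.utilization != 0.0)]
            | length'
}

used=$(used_chunks)
# This run prints used=0: every chunk was drained before the shutdown.
[[ $used -eq 0 ]] || exit 1
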
00:58:49.251 [2024-11-26 17:55:49.802011] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84196 ] 00:58:49.509 [2024-11-26 17:55:49.985713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:49.509 [2024-11-26 17:55:50.139905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:58:51.415  [2024-11-26T17:55:52.678Z] Copying: 656/1024 [MB] (656 MBps) [2024-11-26T17:55:54.581Z] Copying: 1024/1024 [MB] (average 660 MBps) 00:58:53.887 00:58:54.146 17:55:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:58:54.146 17:55:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:58:56.072 17:55:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:58:56.072 Validate MD5 checksum, iteration 2 00:58:56.072 17:55:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c98b70bf945913f9611f024c27d0fe5c 00:58:56.073 17:55:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c98b70bf945913f9611f024c27d0fe5c != \c\9\8\b\7\0\b\f\9\4\5\9\1\3\f\9\6\1\1\f\0\2\4\c\2\7\d\0\f\e\5\c ]] 00:58:56.073 17:55:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:58:56.073 17:55:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:58:56.073 17:55:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:58:56.073 17:55:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:58:56.073 17:55:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:58:56.073 17:55:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:58:56.073 17:55:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:58:56.073 17:55:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:58:56.073 17:55:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:58:56.073 [2024-11-26 17:55:56.500211] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
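
Each 'Validate MD5 checksum' iteration above reads a 1 GiB window from the ftln1 bdev over NVMe/TCP with spdk_dd, hashes the result, and compares it against the sum recorded when the data was written. A sketch of that loop, assuming the two sums observed in this run as the reference values:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
FILE=/home/vagrant/spdk_repo/spdk/test/ftl/file
# Reference sums taken from iterations 1 and 2 of this run.
expected=(c98b70bf945913f9611f024c27d0fe5c 52dc2180db3dd129ca9c7309e8bd0a99)

skip=0
for ((i = 0; i < ${#expected[@]}; i++)); do
    # Read 1024 blocks of 1 MiB from ftln1, starting $skip blocks in.
    "$DD" --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of="$FILE" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    sum=$(md5sum "$FILE" | cut -f1 -d' ')
    [[ $sum == "${expected[i]}" ]] || { echo "MD5 mismatch at window $i" >&2; exit 1; }
    skip=$((skip + 1024))
done
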
00:58:56.073 [2024-11-26 17:55:56.500566] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84272 ] 00:58:56.073 [2024-11-26 17:55:56.683653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:56.344 [2024-11-26 17:55:56.825087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:58:58.265  [2024-11-26T17:55:59.526Z] Copying: 539/1024 [MB] (539 MBps) [2024-11-26T17:56:00.904Z] Copying: 1024/1024 [MB] (average 546 MBps) 00:59:00.210 00:59:00.210 17:56:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:59:00.210 17:56:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:59:02.116 17:56:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:59:02.116 17:56:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=52dc2180db3dd129ca9c7309e8bd0a99 00:59:02.116 17:56:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 52dc2180db3dd129ca9c7309e8bd0a99 != \5\2\d\c\2\1\8\0\d\b\3\d\d\1\2\9\c\a\9\c\7\3\0\9\e\8\b\d\0\a\9\9 ]] 00:59:02.116 17:56:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:59:02.116 17:56:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:59:02.116 17:56:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:59:02.116 17:56:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84118 ]] 00:59:02.116 17:56:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84118 00:59:02.116 17:56:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:59:02.116 17:56:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:59:02.116 17:56:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:59:02.116 17:56:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:59:02.116 17:56:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:59:02.116 17:56:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84354 00:59:02.116 17:56:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:59:02.116 17:56:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:59:02.117 17:56:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84354 00:59:02.117 17:56:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84354 ']' 00:59:02.117 17:56:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:59:02.117 17:56:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:59:02.117 17:56:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:59:02.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:59:02.117 17:56:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:59:02.117 17:56:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:59:02.117 [2024-11-26 17:56:02.799516] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:59:02.117 [2024-11-26 17:56:02.799656] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84354 ] 00:59:02.375 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84118 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:59:02.375 [2024-11-26 17:56:02.985491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:59:02.633 [2024-11-26 17:56:03.107070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:59:03.573 [2024-11-26 17:56:04.134677] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:59:03.573 [2024-11-26 17:56:04.134757] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:59:03.834 [2024-11-26 17:56:04.283238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:03.834 [2024-11-26 17:56:04.283307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:59:03.834 [2024-11-26 17:56:04.283326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:59:03.834 [2024-11-26 17:56:04.283340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:03.834 [2024-11-26 17:56:04.283428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:03.834 [2024-11-26 17:56:04.283445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:59:03.834 [2024-11-26 17:56:04.283458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:59:03.834 [2024-11-26 17:56:04.283471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:03.834 [2024-11-26 17:56:04.283520] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:59:03.834 [2024-11-26 17:56:04.284478] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:59:03.834 [2024-11-26 17:56:04.284535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:03.834 [2024-11-26 17:56:04.284550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:59:03.834 [2024-11-26 17:56:04.284564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.042 ms 00:59:03.834 [2024-11-26 17:56:04.284576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:03.834 [2024-11-26 17:56:04.285032] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:59:03.834 [2024-11-26 17:56:04.310760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:03.834 [2024-11-26 17:56:04.310815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:59:03.834 [2024-11-26 17:56:04.310833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.769 ms 00:59:03.834 [2024-11-26 17:56:04.310847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:03.834 [2024-11-26 17:56:04.325970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:59:03.834 [2024-11-26 17:56:04.326219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:59:03.834 [2024-11-26 17:56:04.326244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:59:03.834 [2024-11-26 17:56:04.326257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:03.834 [2024-11-26 17:56:04.326868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:03.834 [2024-11-26 17:56:04.326891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:59:03.834 [2024-11-26 17:56:04.326906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.499 ms 00:59:03.834 [2024-11-26 17:56:04.326924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:03.834 [2024-11-26 17:56:04.326995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:03.834 [2024-11-26 17:56:04.327011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:59:03.834 [2024-11-26 17:56:04.327040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:59:03.834 [2024-11-26 17:56:04.327056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:03.834 [2024-11-26 17:56:04.327090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:03.834 [2024-11-26 17:56:04.327104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:59:03.834 [2024-11-26 17:56:04.327119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:59:03.834 [2024-11-26 17:56:04.327133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:03.834 [2024-11-26 17:56:04.327172] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:59:03.834 [2024-11-26 17:56:04.331706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:03.834 [2024-11-26 17:56:04.331752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:59:03.835 [2024-11-26 17:56:04.331773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.555 ms 00:59:03.835 [2024-11-26 17:56:04.331788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:03.835 [2024-11-26 17:56:04.331821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:03.835 [2024-11-26 17:56:04.331837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:59:03.835 [2024-11-26 17:56:04.331851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:59:03.835 [2024-11-26 17:56:04.331865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:03.835 [2024-11-26 17:56:04.331916] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:59:03.835 [2024-11-26 17:56:04.331945] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:59:03.835 [2024-11-26 17:56:04.331987] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:59:03.835 [2024-11-26 17:56:04.332014] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:59:03.835 [2024-11-26 17:56:04.332113] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:59:03.835 [2024-11-26 17:56:04.332131] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:59:03.835 [2024-11-26 17:56:04.332149] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:59:03.835 [2024-11-26 17:56:04.332167] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:59:03.835 [2024-11-26 17:56:04.332183] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:59:03.835 [2024-11-26 17:56:04.332197] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:59:03.835 [2024-11-26 17:56:04.332210] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:59:03.835 [2024-11-26 17:56:04.332223] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:59:03.835 [2024-11-26 17:56:04.332241] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:59:03.835 [2024-11-26 17:56:04.332255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:03.835 [2024-11-26 17:56:04.332269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:59:03.835 [2024-11-26 17:56:04.332284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.342 ms 00:59:03.835 [2024-11-26 17:56:04.332297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:03.835 [2024-11-26 17:56:04.332380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:03.835 [2024-11-26 17:56:04.332396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:59:03.835 [2024-11-26 17:56:04.332410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:59:03.835 [2024-11-26 17:56:04.332423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:03.835 [2024-11-26 17:56:04.332556] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:59:03.835 [2024-11-26 17:56:04.332585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:59:03.835 [2024-11-26 17:56:04.332600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:59:03.835 [2024-11-26 17:56:04.332613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:59:03.835 [2024-11-26 17:56:04.332652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:59:03.835 [2024-11-26 17:56:04.332665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:59:03.835 [2024-11-26 17:56:04.332678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:59:03.835 [2024-11-26 17:56:04.332691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:59:03.835 [2024-11-26 17:56:04.332706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:59:03.835 [2024-11-26 17:56:04.332718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:59:03.835 [2024-11-26 17:56:04.332730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:59:03.835 [2024-11-26 17:56:04.332743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:59:03.835 [2024-11-26 17:56:04.332756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:59:03.835 [2024-11-26 17:56:04.332769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:59:03.835 [2024-11-26 17:56:04.332783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:59:03.835 [2024-11-26 17:56:04.332795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:59:03.835 [2024-11-26 17:56:04.332811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:59:03.835 [2024-11-26 17:56:04.332823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:59:03.835 [2024-11-26 17:56:04.332835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:59:03.835 [2024-11-26 17:56:04.332848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:59:03.835 [2024-11-26 17:56:04.332860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:59:03.835 [2024-11-26 17:56:04.332887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:59:03.835 [2024-11-26 17:56:04.332901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:59:03.835 [2024-11-26 17:56:04.332913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:59:03.835 [2024-11-26 17:56:04.332925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:59:03.835 [2024-11-26 17:56:04.332937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:59:03.836 [2024-11-26 17:56:04.332950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:59:03.836 [2024-11-26 17:56:04.332961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:59:03.836 [2024-11-26 17:56:04.332973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:59:03.836 [2024-11-26 17:56:04.332986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:59:03.836 [2024-11-26 17:56:04.332998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:59:03.836 [2024-11-26 17:56:04.333010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:59:03.836 [2024-11-26 17:56:04.333023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:59:03.836 [2024-11-26 17:56:04.333034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:59:03.836 [2024-11-26 17:56:04.333046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:59:03.836 [2024-11-26 17:56:04.333058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:59:03.836 [2024-11-26 17:56:04.333069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:59:03.836 [2024-11-26 17:56:04.333082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:59:03.836 [2024-11-26 17:56:04.333094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:59:03.836 [2024-11-26 17:56:04.333105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:59:03.836 [2024-11-26 17:56:04.333117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:59:03.836 [2024-11-26 17:56:04.333129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:59:03.836 [2024-11-26 17:56:04.333141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:59:03.836 [2024-11-26 17:56:04.333152] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:59:03.836 [2024-11-26 17:56:04.333166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:59:03.836 [2024-11-26 17:56:04.333179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:59:03.836 [2024-11-26 17:56:04.333191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:59:03.836 [2024-11-26 17:56:04.333205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:59:03.836 [2024-11-26 17:56:04.333218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:59:03.836 [2024-11-26 17:56:04.333230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:59:03.836 [2024-11-26 17:56:04.333242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:59:03.836 [2024-11-26 17:56:04.333254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:59:03.836 [2024-11-26 17:56:04.333266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:59:03.836 [2024-11-26 17:56:04.333280] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:59:03.836 [2024-11-26 17:56:04.333294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:59:03.836 [2024-11-26 17:56:04.333310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:59:03.836 [2024-11-26 17:56:04.333323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:59:03.836 [2024-11-26 17:56:04.333337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:59:03.836 [2024-11-26 17:56:04.333350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:59:03.836 [2024-11-26 17:56:04.333363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:59:03.836 [2024-11-26 17:56:04.333377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:59:03.836 [2024-11-26 17:56:04.333390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:59:03.836 [2024-11-26 17:56:04.333404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:59:03.836 [2024-11-26 17:56:04.333417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:59:03.836 [2024-11-26 17:56:04.333433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:59:03.836 [2024-11-26 17:56:04.333446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:59:03.836 [2024-11-26 17:56:04.333459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:59:03.836 [2024-11-26 17:56:04.333473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:59:03.836 [2024-11-26 17:56:04.333486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:59:03.836 [2024-11-26 17:56:04.333499] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:59:03.836 [2024-11-26 17:56:04.333534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:59:03.836 [2024-11-26 17:56:04.333548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:59:03.836 [2024-11-26 17:56:04.333562] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:59:03.836 [2024-11-26 17:56:04.333577] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:59:03.836 [2024-11-26 17:56:04.333591] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:59:03.836 [2024-11-26 17:56:04.333605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:03.836 [2024-11-26 17:56:04.333623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:59:03.836 [2024-11-26 17:56:04.333637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.132 ms 00:59:03.836 [2024-11-26 17:56:04.333650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:03.836 [2024-11-26 17:56:04.374997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:03.836 [2024-11-26 17:56:04.375065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:59:03.836 [2024-11-26 17:56:04.375085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.345 ms 00:59:03.836 [2024-11-26 17:56:04.375098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:03.836 [2024-11-26 17:56:04.375184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:03.836 [2024-11-26 17:56:04.375198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:59:03.836 [2024-11-26 17:56:04.375213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:59:03.836 [2024-11-26 17:56:04.375225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:03.836 [2024-11-26 17:56:04.423124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:03.836 [2024-11-26 17:56:04.423380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:59:03.836 [2024-11-26 17:56:04.423475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.845 ms 00:59:03.836 [2024-11-26 17:56:04.423543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:03.836 [2024-11-26 17:56:04.423643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:03.836 [2024-11-26 17:56:04.423729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:59:03.836 [2024-11-26 17:56:04.423758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:59:03.836 [2024-11-26 17:56:04.423771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:03.836 [2024-11-26 17:56:04.423942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:03.836 [2024-11-26 17:56:04.423960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:59:03.836 [2024-11-26 17:56:04.423974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.075 ms 00:59:03.836 [2024-11-26 17:56:04.423987] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:59:03.836 [2024-11-26 17:56:04.424035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:03.836 [2024-11-26 17:56:04.424048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:59:03.836 [2024-11-26 17:56:04.424061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:59:03.836 [2024-11-26 17:56:04.424080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:03.836 [2024-11-26 17:56:04.445336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:03.836 [2024-11-26 17:56:04.445538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:59:03.836 [2024-11-26 17:56:04.445639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.262 ms 00:59:03.836 [2024-11-26 17:56:04.445681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:03.836 [2024-11-26 17:56:04.445889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:03.836 [2024-11-26 17:56:04.446013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:59:03.836 [2024-11-26 17:56:04.446058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:59:03.837 [2024-11-26 17:56:04.446094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:03.837 [2024-11-26 17:56:04.487534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:03.837 [2024-11-26 17:56:04.487716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:59:03.837 [2024-11-26 17:56:04.487811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.398 ms 00:59:03.837 [2024-11-26 17:56:04.487855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:03.837 [2024-11-26 17:56:04.503166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:03.837 [2024-11-26 17:56:04.503371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:59:03.837 [2024-11-26 17:56:04.503395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.683 ms 00:59:03.837 [2024-11-26 17:56:04.503410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:04.096 [2024-11-26 17:56:04.597180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:04.096 [2024-11-26 17:56:04.597261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:59:04.096 [2024-11-26 17:56:04.597282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 93.789 ms 00:59:04.096 [2024-11-26 17:56:04.597296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:04.096 [2024-11-26 17:56:04.597520] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:59:04.096 [2024-11-26 17:56:04.597665] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:59:04.096 [2024-11-26 17:56:04.597798] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:59:04.096 [2024-11-26 17:56:04.597926] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:59:04.096 [2024-11-26 17:56:04.597944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:04.096 [2024-11-26 17:56:04.597957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:59:04.096 [2024-11-26 
17:56:04.597971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.567 ms 00:59:04.096 [2024-11-26 17:56:04.597984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:04.096 [2024-11-26 17:56:04.598086] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:59:04.096 [2024-11-26 17:56:04.598112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:04.096 [2024-11-26 17:56:04.598124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:59:04.096 [2024-11-26 17:56:04.598138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:59:04.096 [2024-11-26 17:56:04.598151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:04.096 [2024-11-26 17:56:04.621249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:04.096 [2024-11-26 17:56:04.621525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:59:04.096 [2024-11-26 17:56:04.621555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.106 ms 00:59:04.096 [2024-11-26 17:56:04.621569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:04.096 [2024-11-26 17:56:04.635618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:04.096 [2024-11-26 17:56:04.635665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:59:04.096 [2024-11-26 17:56:04.635681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:59:04.096 [2024-11-26 17:56:04.635693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:04.096 [2024-11-26 17:56:04.635813] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:59:04.096 [2024-11-26 17:56:04.636009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:04.096 [2024-11-26 17:56:04.636024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:59:04.096 [2024-11-26 17:56:04.636038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.199 ms 00:59:04.096 [2024-11-26 17:56:04.636050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:04.665 [2024-11-26 17:56:05.199255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:04.665 [2024-11-26 17:56:05.199349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:59:04.665 [2024-11-26 17:56:05.199371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 562.987 ms 00:59:04.665 [2024-11-26 17:56:05.199384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:04.665 [2024-11-26 17:56:05.205257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:04.665 [2024-11-26 17:56:05.205309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:59:04.665 [2024-11-26 17:56:05.205326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.017 ms 00:59:04.665 [2024-11-26 17:56:05.205346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:04.665 [2024-11-26 17:56:05.205944] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:59:04.665 [2024-11-26 17:56:05.205969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:04.665 [2024-11-26 17:56:05.205982] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:59:04.665 [2024-11-26 17:56:05.205996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.588 ms 00:59:04.665 [2024-11-26 17:56:05.206008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:04.665 [2024-11-26 17:56:05.206046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:04.665 [2024-11-26 17:56:05.206060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:59:04.665 [2024-11-26 17:56:05.206081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:59:04.665 [2024-11-26 17:56:05.206093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:04.665 [2024-11-26 17:56:05.206151] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 571.266 ms, result 0 00:59:04.665 [2024-11-26 17:56:05.206201] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:59:04.665 [2024-11-26 17:56:05.206304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:04.665 [2024-11-26 17:56:05.206316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:59:04.665 [2024-11-26 17:56:05.206327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.104 ms 00:59:04.665 [2024-11-26 17:56:05.206339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:05.234 [2024-11-26 17:56:05.787858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:05.234 [2024-11-26 17:56:05.788126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:59:05.234 [2024-11-26 17:56:05.788273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 581.026 ms 00:59:05.234 [2024-11-26 17:56:05.788316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:05.234 [2024-11-26 17:56:05.794238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:05.234 [2024-11-26 17:56:05.794417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:59:05.234 [2024-11-26 17:56:05.794518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.319 ms 00:59:05.234 [2024-11-26 17:56:05.794563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:05.234 [2024-11-26 17:56:05.795125] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:59:05.234 [2024-11-26 17:56:05.795225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:05.234 [2024-11-26 17:56:05.795325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:59:05.234 [2024-11-26 17:56:05.795454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.597 ms 00:59:05.234 [2024-11-26 17:56:05.795506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:05.234 [2024-11-26 17:56:05.795583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:05.234 [2024-11-26 17:56:05.795692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:59:05.234 [2024-11-26 17:56:05.795733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:59:05.234 [2024-11-26 17:56:05.795770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:05.234 [2024-11-26 
17:56:05.795892] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 590.637 ms, result 0 00:59:05.234 [2024-11-26 17:56:05.796086] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:59:05.235 [2024-11-26 17:56:05.796199] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:59:05.235 [2024-11-26 17:56:05.796315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:05.235 [2024-11-26 17:56:05.796353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:59:05.235 [2024-11-26 17:56:05.796430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1162.414 ms 00:59:05.235 [2024-11-26 17:56:05.796468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:05.235 [2024-11-26 17:56:05.796589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:05.235 [2024-11-26 17:56:05.796807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:59:05.235 [2024-11-26 17:56:05.796850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:59:05.235 [2024-11-26 17:56:05.796886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:05.235 [2024-11-26 17:56:05.808605] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:59:05.235 [2024-11-26 17:56:05.808890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:05.235 [2024-11-26 17:56:05.808942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:59:05.235 [2024-11-26 17:56:05.809029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.974 ms 00:59:05.235 [2024-11-26 17:56:05.809069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:05.235 [2024-11-26 17:56:05.809748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:05.235 [2024-11-26 17:56:05.809899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:59:05.235 [2024-11-26 17:56:05.809986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.541 ms 00:59:05.235 [2024-11-26 17:56:05.810004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:05.235 [2024-11-26 17:56:05.812043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:05.235 [2024-11-26 17:56:05.812085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:59:05.235 [2024-11-26 17:56:05.812100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.010 ms 00:59:05.235 [2024-11-26 17:56:05.812112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:05.235 [2024-11-26 17:56:05.812162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:05.235 [2024-11-26 17:56:05.812177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:59:05.235 [2024-11-26 17:56:05.812197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:59:05.235 [2024-11-26 17:56:05.812210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:05.235 [2024-11-26 17:56:05.812316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:05.235 [2024-11-26 17:56:05.812332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:59:05.235 
[2024-11-26 17:56:05.812345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:59:05.235 [2024-11-26 17:56:05.812358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:05.235 [2024-11-26 17:56:05.812383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:05.235 [2024-11-26 17:56:05.812396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:59:05.235 [2024-11-26 17:56:05.812409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:59:05.235 [2024-11-26 17:56:05.812427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:05.235 [2024-11-26 17:56:05.812465] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:59:05.235 [2024-11-26 17:56:05.812480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:05.235 [2024-11-26 17:56:05.812492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:59:05.235 [2024-11-26 17:56:05.812518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:59:05.235 [2024-11-26 17:56:05.812530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:05.235 [2024-11-26 17:56:05.812588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:05.235 [2024-11-26 17:56:05.812603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:59:05.235 [2024-11-26 17:56:05.812615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:59:05.235 [2024-11-26 17:56:05.812632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:05.235 [2024-11-26 17:56:05.813625] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1532.400 ms, result 0 00:59:05.235 [2024-11-26 17:56:05.825993] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:59:05.235 [2024-11-26 17:56:05.841992] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:59:05.235 [2024-11-26 17:56:05.851553] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:59:05.235 17:56:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:59:05.235 17:56:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:59:05.235 17:56:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:59:05.235 17:56:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:59:05.235 17:56:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:59:05.235 17:56:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:59:05.235 17:56:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:59:05.235 17:56:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:59:05.235 17:56:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:59:05.235 Validate MD5 checksum, iteration 1 00:59:05.235 17:56:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:59:05.235 17:56:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:59:05.235 17:56:05 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:59:05.235 17:56:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:59:05.235 17:56:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:59:05.235 17:56:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:59:05.493 [2024-11-26 17:56:06.002713] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 00:59:05.493 [2024-11-26 17:56:06.003076] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84392 ] 00:59:05.752 [2024-11-26 17:56:06.187396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:59:05.752 [2024-11-26 17:56:06.334539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:59:07.655  [2024-11-26T17:56:09.351Z] Copying: 541/1024 [MB] (541 MBps) [2024-11-26T17:56:10.742Z] Copying: 1024/1024 [MB] (average 540 MBps) 00:59:10.048 00:59:10.048 17:56:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:59:10.048 17:56:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:59:11.951 17:56:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:59:11.951 Validate MD5 checksum, iteration 2 00:59:11.951 17:56:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c98b70bf945913f9611f024c27d0fe5c 00:59:11.951 17:56:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c98b70bf945913f9611f024c27d0fe5c != \c\9\8\b\7\0\b\f\9\4\5\9\1\3\f\9\6\1\1\f\0\2\4\c\2\7\d\0\f\e\5\c ]] 00:59:11.951 17:56:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:59:11.951 17:56:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:59:11.951 17:56:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:59:11.951 17:56:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:59:11.951 17:56:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:59:11.951 17:56:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:59:11.951 17:56:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:59:11.951 17:56:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:59:11.951 17:56:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:59:11.951 [2024-11-26 17:56:12.629374] Starting SPDK v25.01-pre git sha1 
c86e5b182 / DPDK 24.03.0 initialization... 00:59:11.952 [2024-11-26 17:56:12.629843] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84463 ] 00:59:12.209 [2024-11-26 17:56:12.818468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:59:12.467 [2024-11-26 17:56:12.966169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:59:14.393  [2024-11-26T17:56:15.345Z] Copying: 702/1024 [MB] (702 MBps) [2024-11-26T17:56:19.580Z] Copying: 1024/1024 [MB] (average 696 MBps) 00:59:18.886 00:59:18.886 17:56:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:59:18.886 17:56:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:59:20.266 17:56:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:59:20.266 17:56:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=52dc2180db3dd129ca9c7309e8bd0a99 00:59:20.266 17:56:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 52dc2180db3dd129ca9c7309e8bd0a99 != \5\2\d\c\2\1\8\0\d\b\3\d\d\1\2\9\c\a\9\c\7\3\0\9\e\8\b\d\0\a\9\9 ]] 00:59:20.266 17:56:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:59:20.266 17:56:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:59:20.266 17:56:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:59:20.266 17:56:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:59:20.266 17:56:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:59:20.266 17:56:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:59:20.525 17:56:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:59:20.525 17:56:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:59:20.525 17:56:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:59:20.525 17:56:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:59:20.525 17:56:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84354 ]] 00:59:20.525 17:56:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84354 00:59:20.525 17:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84354 ']' 00:59:20.525 17:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84354 00:59:20.525 17:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:59:20.525 17:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:59:20.525 17:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84354 00:59:20.525 killing process with pid 84354 00:59:20.525 17:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:59:20.525 17:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:59:20.525 17:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84354' 00:59:20.525 17:56:21 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 84354 00:59:20.525 17:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84354 00:59:21.902 [2024-11-26 17:56:22.366341] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:59:21.902 [2024-11-26 17:56:22.386007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:21.902 [2024-11-26 17:56:22.386043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:59:21.902 [2024-11-26 17:56:22.386060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:59:21.902 [2024-11-26 17:56:22.386072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:21.902 [2024-11-26 17:56:22.386099] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:59:21.902 [2024-11-26 17:56:22.391062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:21.902 [2024-11-26 17:56:22.391094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:59:21.902 [2024-11-26 17:56:22.391107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.953 ms 00:59:21.902 [2024-11-26 17:56:22.391118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:21.902 [2024-11-26 17:56:22.391355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:21.902 [2024-11-26 17:56:22.391375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:59:21.902 [2024-11-26 17:56:22.391387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.210 ms 00:59:21.902 [2024-11-26 17:56:22.391398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:21.902 [2024-11-26 17:56:22.392563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:21.902 [2024-11-26 17:56:22.392707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:59:21.902 [2024-11-26 17:56:22.392734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.147 ms 00:59:21.902 [2024-11-26 17:56:22.392746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:21.902 [2024-11-26 17:56:22.393753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:21.902 [2024-11-26 17:56:22.393773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:59:21.902 [2024-11-26 17:56:22.393786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.963 ms 00:59:21.902 [2024-11-26 17:56:22.393797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:21.902 [2024-11-26 17:56:22.409278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:21.903 [2024-11-26 17:56:22.409309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:59:21.903 [2024-11-26 17:56:22.409329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.446 ms 00:59:21.903 [2024-11-26 17:56:22.409340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:21.903 [2024-11-26 17:56:22.417273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:21.903 [2024-11-26 17:56:22.417303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:59:21.903 [2024-11-26 17:56:22.417316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.908 ms 00:59:21.903 [2024-11-26 17:56:22.417327] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:59:21.903 [2024-11-26 17:56:22.417434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:21.903 [2024-11-26 17:56:22.417448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:59:21.903 [2024-11-26 17:56:22.417465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:59:21.903 [2024-11-26 17:56:22.417479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:21.903 [2024-11-26 17:56:22.432515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:21.903 [2024-11-26 17:56:22.432542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:59:21.903 [2024-11-26 17:56:22.432555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.031 ms 00:59:21.903 [2024-11-26 17:56:22.432566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:21.903 [2024-11-26 17:56:22.447555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:21.903 [2024-11-26 17:56:22.447684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:59:21.903 [2024-11-26 17:56:22.447704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.977 ms 00:59:21.903 [2024-11-26 17:56:22.447716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:21.903 [2024-11-26 17:56:22.462232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:21.903 [2024-11-26 17:56:22.462370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:59:21.903 [2024-11-26 17:56:22.462390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.501 ms 00:59:21.903 [2024-11-26 17:56:22.462401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:21.903 [2024-11-26 17:56:22.476795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:21.903 [2024-11-26 17:56:22.476931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:59:21.903 [2024-11-26 17:56:22.477055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.298 ms 00:59:21.903 [2024-11-26 17:56:22.477091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:21.903 [2024-11-26 17:56:22.477147] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:59:21.903 [2024-11-26 17:56:22.477191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:59:21.903 [2024-11-26 17:56:22.477243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:59:21.903 [2024-11-26 17:56:22.477345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:59:21.903 [2024-11-26 17:56:22.477396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:59:21.903 [2024-11-26 17:56:22.477445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:59:21.903 [2024-11-26 17:56:22.477493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:59:21.903 [2024-11-26 17:56:22.477603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:59:21.903 [2024-11-26 17:56:22.477653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:59:21.903 
[2024-11-26 17:56:22.477702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:59:21.903 [2024-11-26 17:56:22.477750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:59:21.903 [2024-11-26 17:56:22.477902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:59:21.903 [2024-11-26 17:56:22.477950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:59:21.903 [2024-11-26 17:56:22.477998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:59:21.903 [2024-11-26 17:56:22.478085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:59:21.903 [2024-11-26 17:56:22.478175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:59:21.903 [2024-11-26 17:56:22.478255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:59:21.903 [2024-11-26 17:56:22.478309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:59:21.903 [2024-11-26 17:56:22.478357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:59:21.903 [2024-11-26 17:56:22.478439] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:59:21.903 [2024-11-26 17:56:22.478471] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f7bad5d5-01fa-4dfe-bb1f-4590f7e6f6ba 00:59:21.903 [2024-11-26 17:56:22.478533] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:59:21.903 [2024-11-26 17:56:22.478564] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:59:21.903 [2024-11-26 17:56:22.478621] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:59:21.903 [2024-11-26 17:56:22.478656] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:59:21.903 [2024-11-26 17:56:22.478792] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:59:21.903 [2024-11-26 17:56:22.478833] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:59:21.903 [2024-11-26 17:56:22.478864] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:59:21.903 [2024-11-26 17:56:22.478893] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:59:21.903 [2024-11-26 17:56:22.478922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:59:21.903 [2024-11-26 17:56:22.478952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:21.903 [2024-11-26 17:56:22.478982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:59:21.903 [2024-11-26 17:56:22.479084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.809 ms 00:59:21.903 [2024-11-26 17:56:22.479148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:21.903 [2024-11-26 17:56:22.500053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:21.903 [2024-11-26 17:56:22.500171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:59:21.903 [2024-11-26 17:56:22.500246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.882 ms 00:59:21.903 [2024-11-26 17:56:22.500290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:59:21.903 [2024-11-26 17:56:22.500903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:59:21.903 [2024-11-26 17:56:22.500998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:59:21.903 [2024-11-26 17:56:22.501064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.569 ms 00:59:21.903 [2024-11-26 17:56:22.501098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:21.903 [2024-11-26 17:56:22.571161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:59:21.903 [2024-11-26 17:56:22.571325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:59:21.903 [2024-11-26 17:56:22.571452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:59:21.903 [2024-11-26 17:56:22.571506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:21.903 [2024-11-26 17:56:22.571572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:59:21.903 [2024-11-26 17:56:22.571606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:59:21.903 [2024-11-26 17:56:22.571637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:59:21.903 [2024-11-26 17:56:22.571667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:21.903 [2024-11-26 17:56:22.571857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:59:21.903 [2024-11-26 17:56:22.571901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:59:21.903 [2024-11-26 17:56:22.571933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:59:21.903 [2024-11-26 17:56:22.571963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:21.903 [2024-11-26 17:56:22.572105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:59:21.903 [2024-11-26 17:56:22.572137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:59:21.903 [2024-11-26 17:56:22.572168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:59:21.904 [2024-11-26 17:56:22.572244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:22.162 [2024-11-26 17:56:22.709015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:59:22.162 [2024-11-26 17:56:22.709286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:59:22.162 [2024-11-26 17:56:22.709452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:59:22.162 [2024-11-26 17:56:22.709509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:22.162 [2024-11-26 17:56:22.814745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:59:22.162 [2024-11-26 17:56:22.815011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:59:22.162 [2024-11-26 17:56:22.815036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:59:22.162 [2024-11-26 17:56:22.815049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:22.162 [2024-11-26 17:56:22.815198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:59:22.162 [2024-11-26 17:56:22.815212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:59:22.162 [2024-11-26 17:56:22.815224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:59:22.162 [2024-11-26 17:56:22.815235] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:22.162 [2024-11-26 17:56:22.815306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:59:22.162 [2024-11-26 17:56:22.815332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:59:22.162 [2024-11-26 17:56:22.815355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:59:22.162 [2024-11-26 17:56:22.815371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:22.162 [2024-11-26 17:56:22.815526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:59:22.162 [2024-11-26 17:56:22.815542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:59:22.162 [2024-11-26 17:56:22.815554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:59:22.162 [2024-11-26 17:56:22.815565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:22.162 [2024-11-26 17:56:22.815610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:59:22.162 [2024-11-26 17:56:22.815629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:59:22.162 [2024-11-26 17:56:22.815641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:59:22.163 [2024-11-26 17:56:22.815652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:22.163 [2024-11-26 17:56:22.815704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:59:22.163 [2024-11-26 17:56:22.815717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:59:22.163 [2024-11-26 17:56:22.815728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:59:22.163 [2024-11-26 17:56:22.815739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:22.163 [2024-11-26 17:56:22.815798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:59:22.163 [2024-11-26 17:56:22.815811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:59:22.163 [2024-11-26 17:56:22.815830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:59:22.163 [2024-11-26 17:56:22.815841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:59:22.163 [2024-11-26 17:56:22.815987] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 430.635 ms, result 0 00:59:24.133 17:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:59:24.133 17:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:59:24.133 17:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:59:24.133 17:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:59:24.133 17:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:59:24.133 17:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:59:24.133 Remove shared memory files 00:59:24.133 17:56:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:59:24.133 17:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:59:24.133 17:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:59:24.133 17:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:59:24.133 17:56:24 
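The trace_step notices above come in fixed groups of four: an Action (or Rollback) marker, a step name, a duration, and a status. When a shutdown looks slow, those groups can be ranked by duration with a short awk pass. This is an editor's sketch, not part of the test suite; it assumes one notice per line, as in the raw console output, and 'ftl.log' is a placeholder file name:

    awk '/trace_step/ && /name: /     { sub(/.*name: /, "");     step = $0 }
         /trace_step/ && /duration: / { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                                        printf "%10.3f ms  %s\n", $0, step }' ftl.log |
        sort -rn | head

On the shutdown above this ranks 'Deinitialize L2P' (20.882 ms) and the 14-15 ms persist and clean-state steps at the top; the rollback steps all report 0.000 ms.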
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84118 00:59:24.133 17:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:59:24.133 17:56:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:59:24.133 ************************************ 00:59:24.133 END TEST ftl_upgrade_shutdown 00:59:24.133 ************************************ 00:59:24.133 00:59:24.133 real 1m33.175s 00:59:24.133 user 2m7.238s 00:59:24.133 sys 0m25.295s 00:59:24.133 17:56:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:24.133 17:56:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:59:24.133 17:56:24 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:59:24.133 17:56:24 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:59:24.133 17:56:24 ftl -- ftl/ftl.sh@14 -- # killprocess 76781 00:59:24.133 Process with pid 76781 is not found 00:59:24.133 17:56:24 ftl -- common/autotest_common.sh@954 -- # '[' -z 76781 ']' 00:59:24.133 17:56:24 ftl -- common/autotest_common.sh@958 -- # kill -0 76781 00:59:24.133 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76781) - No such process 00:59:24.133 17:56:24 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76781 is not found' 00:59:24.133 17:56:24 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:59:24.133 17:56:24 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84617 00:59:24.133 17:56:24 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:59:24.133 17:56:24 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84617 00:59:24.133 17:56:24 ftl -- common/autotest_common.sh@835 -- # '[' -z 84617 ']' 00:59:24.133 17:56:24 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:59:24.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:59:24.133 17:56:24 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:59:24.133 17:56:24 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:59:24.133 17:56:24 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:59:24.133 17:56:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:59:24.133 [2024-11-26 17:56:24.537038] Starting SPDK v25.01-pre git sha1 c86e5b182 / DPDK 24.03.0 initialization... 
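The waitforlisten call above blocks until the newly started spdk_tgt answers on its RPC socket; the trace shows rpc_addr=/var/tmp/spdk.sock and max_retries=100. A minimal stand-in for that wait loop (a sketch only: it probes with rpc.py's rpc_get_methods call rather than whatever waitforlisten checks internally) would be:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    retries=100
    # poll the RPC socket until the target responds or the retry budget runs out
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        (( --retries > 0 )) || { echo 'spdk_tgt did not come up' >&2; exit 1; }
        sleep 0.1
    done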
00:59:24.133 [2024-11-26 17:56:24.537400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84617 ] 00:59:24.133 [2024-11-26 17:56:24.721925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:59:24.390 [2024-11-26 17:56:24.871709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:59:25.326 17:56:25 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:59:25.326 17:56:25 ftl -- common/autotest_common.sh@868 -- # return 0 00:59:25.326 17:56:25 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:59:25.584 nvme0n1 00:59:25.584 17:56:26 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:59:25.584 17:56:26 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:59:25.584 17:56:26 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:59:25.842 17:56:26 ftl -- ftl/common.sh@28 -- # stores=dbd50d15-f58f-4362-ad4e-deef4293ea2f 00:59:25.842 17:56:26 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:59:25.842 17:56:26 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dbd50d15-f58f-4362-ad4e-deef4293ea2f 00:59:26.101 17:56:26 ftl -- ftl/ftl.sh@23 -- # killprocess 84617 00:59:26.101 17:56:26 ftl -- common/autotest_common.sh@954 -- # '[' -z 84617 ']' 00:59:26.101 17:56:26 ftl -- common/autotest_common.sh@958 -- # kill -0 84617 00:59:26.101 17:56:26 ftl -- common/autotest_common.sh@959 -- # uname 00:59:26.101 17:56:26 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:59:26.101 17:56:26 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84617 00:59:26.101 killing process with pid 84617 00:59:26.101 17:56:26 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:59:26.101 17:56:26 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:59:26.101 17:56:26 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84617' 00:59:26.101 17:56:26 ftl -- common/autotest_common.sh@973 -- # kill 84617 00:59:26.101 17:56:26 ftl -- common/autotest_common.sh@978 -- # wait 84617 00:59:29.384 17:56:29 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:59:29.384 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:59:29.384 Waiting for block devices as requested 00:59:29.384 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:59:29.385 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:59:29.643 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:59:29.643 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:59:34.945 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:59:34.945 17:56:35 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:59:34.945 Remove shared memory files 00:59:34.945 17:56:35 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:59:34.945 17:56:35 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:59:34.945 17:56:35 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:59:34.945 17:56:35 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:59:34.945 17:56:35 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:59:34.945 17:56:35 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:59:34.945 
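The clear_lvols step in the block above (ftl/common.sh lines 28-30 of the trace) reduces to: ask the target for every lvstore UUID, then delete each one so the next test starts on a clean device. Reconstructed as a standalone loop from the same rpc.py calls and jq filter that appear in the trace (a sketch of the pattern, not the verbatim helper):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # list all logical-volume stores, then delete each by UUID
    for lvs in $("$rpc" bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        "$rpc" bdev_lvol_delete_lvstore -u "$lvs"
    done

In the run above it finds exactly one store (dbd50d15-f58f-4362-ad4e-deef4293ea2f) and removes it before the target is killed and the PCI devices are reset.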
************************************ 00:59:34.945 END TEST ftl 00:59:34.945 ************************************ 00:59:34.945 00:59:34.945 real 11m44.600s 00:59:34.945 user 14m30.771s 00:59:34.945 sys 1m38.452s 00:59:34.945 17:56:35 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:59:34.945 17:56:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:59:34.945 17:56:35 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:59:34.945 17:56:35 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:59:34.945 17:56:35 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:59:34.946 17:56:35 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:59:34.946 17:56:35 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:59:34.946 17:56:35 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:59:34.946 17:56:35 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:59:34.946 17:56:35 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:59:34.946 17:56:35 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:59:34.946 17:56:35 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:59:34.946 17:56:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:59:34.946 17:56:35 -- common/autotest_common.sh@10 -- # set +x 00:59:34.946 17:56:35 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:59:34.946 17:56:35 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:59:34.946 17:56:35 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:59:34.946 17:56:35 -- common/autotest_common.sh@10 -- # set +x 00:59:37.481 INFO: APP EXITING 00:59:37.481 INFO: killing all VMs 00:59:37.481 INFO: killing vhost app 00:59:37.481 INFO: EXIT DONE 00:59:37.481 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:59:38.049 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:59:38.049 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:59:38.049 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:59:38.049 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:59:38.618 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:59:39.185 Cleaning 00:59:39.185 Removing: /var/run/dpdk/spdk0/config 00:59:39.185 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:59:39.185 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:59:39.185 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:59:39.185 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:59:39.185 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:59:39.185 Removing: /var/run/dpdk/spdk0/hugepage_info 00:59:39.185 Removing: /var/run/dpdk/spdk0 00:59:39.185 Removing: /var/run/dpdk/spdk_pid57547 00:59:39.185 Removing: /var/run/dpdk/spdk_pid57798 00:59:39.185 Removing: /var/run/dpdk/spdk_pid58033 00:59:39.185 Removing: /var/run/dpdk/spdk_pid58147 00:59:39.185 Removing: /var/run/dpdk/spdk_pid58204 00:59:39.185 Removing: /var/run/dpdk/spdk_pid58332 00:59:39.185 Removing: /var/run/dpdk/spdk_pid58356 00:59:39.185 Removing: /var/run/dpdk/spdk_pid58566 00:59:39.185 Removing: /var/run/dpdk/spdk_pid58677 00:59:39.185 Removing: /var/run/dpdk/spdk_pid58784 00:59:39.185 Removing: /var/run/dpdk/spdk_pid58912 00:59:39.185 Removing: /var/run/dpdk/spdk_pid59021 00:59:39.185 Removing: /var/run/dpdk/spdk_pid59060 00:59:39.185 Removing: /var/run/dpdk/spdk_pid59097 00:59:39.185 Removing: /var/run/dpdk/spdk_pid59173 00:59:39.185 Removing: /var/run/dpdk/spdk_pid59300 00:59:39.185 Removing: /var/run/dpdk/spdk_pid59750 00:59:39.185 Removing: /var/run/dpdk/spdk_pid59832 
00:59:39.185 Removing: /var/run/dpdk/spdk_pid59918 00:59:39.185 Removing: /var/run/dpdk/spdk_pid59934 00:59:39.185 Removing: /var/run/dpdk/spdk_pid60095 00:59:39.185 Removing: /var/run/dpdk/spdk_pid60111 00:59:39.185 Removing: /var/run/dpdk/spdk_pid60266 00:59:39.185 Removing: /var/run/dpdk/spdk_pid60282 00:59:39.185 Removing: /var/run/dpdk/spdk_pid60351 00:59:39.185 Removing: /var/run/dpdk/spdk_pid60375 00:59:39.185 Removing: /var/run/dpdk/spdk_pid60439 00:59:39.185 Removing: /var/run/dpdk/spdk_pid60457 00:59:39.185 Removing: /var/run/dpdk/spdk_pid60662 00:59:39.185 Removing: /var/run/dpdk/spdk_pid60694 00:59:39.185 Removing: /var/run/dpdk/spdk_pid60783 00:59:39.185 Removing: /var/run/dpdk/spdk_pid60979 00:59:39.185 Removing: /var/run/dpdk/spdk_pid61075 00:59:39.185 Removing: /var/run/dpdk/spdk_pid61117 00:59:39.185 Removing: /var/run/dpdk/spdk_pid61569 00:59:39.185 Removing: /var/run/dpdk/spdk_pid61673 00:59:39.185 Removing: /var/run/dpdk/spdk_pid61787 00:59:39.185 Removing: /var/run/dpdk/spdk_pid61846 00:59:39.185 Removing: /var/run/dpdk/spdk_pid61870 00:59:39.185 Removing: /var/run/dpdk/spdk_pid61950 00:59:39.185 Removing: /var/run/dpdk/spdk_pid62606 00:59:39.185 Removing: /var/run/dpdk/spdk_pid62648 00:59:39.185 Removing: /var/run/dpdk/spdk_pid63140 00:59:39.185 Removing: /var/run/dpdk/spdk_pid63244 00:59:39.185 Removing: /var/run/dpdk/spdk_pid63357 00:59:39.185 Removing: /var/run/dpdk/spdk_pid63417 00:59:39.185 Removing: /var/run/dpdk/spdk_pid63437 00:59:39.185 Removing: /var/run/dpdk/spdk_pid63468 00:59:39.185 Removing: /var/run/dpdk/spdk_pid65361 00:59:39.185 Removing: /var/run/dpdk/spdk_pid65515 00:59:39.185 Removing: /var/run/dpdk/spdk_pid65524 00:59:39.185 Removing: /var/run/dpdk/spdk_pid65536 00:59:39.185 Removing: /var/run/dpdk/spdk_pid65581 00:59:39.185 Removing: /var/run/dpdk/spdk_pid65585 00:59:39.185 Removing: /var/run/dpdk/spdk_pid65597 00:59:39.445 Removing: /var/run/dpdk/spdk_pid65648 00:59:39.445 Removing: /var/run/dpdk/spdk_pid65652 00:59:39.445 Removing: /var/run/dpdk/spdk_pid65664 00:59:39.445 Removing: /var/run/dpdk/spdk_pid65709 00:59:39.445 Removing: /var/run/dpdk/spdk_pid65718 00:59:39.445 Removing: /var/run/dpdk/spdk_pid65730 00:59:39.445 Removing: /var/run/dpdk/spdk_pid67152 00:59:39.445 Removing: /var/run/dpdk/spdk_pid67271 00:59:39.445 Removing: /var/run/dpdk/spdk_pid68715 00:59:39.445 Removing: /var/run/dpdk/spdk_pid70465 00:59:39.445 Removing: /var/run/dpdk/spdk_pid70543 00:59:39.445 Removing: /var/run/dpdk/spdk_pid70625 00:59:39.445 Removing: /var/run/dpdk/spdk_pid70735 00:59:39.445 Removing: /var/run/dpdk/spdk_pid70827 00:59:39.445 Removing: /var/run/dpdk/spdk_pid70928 00:59:39.445 Removing: /var/run/dpdk/spdk_pid71012 00:59:39.445 Removing: /var/run/dpdk/spdk_pid71088 00:59:39.445 Removing: /var/run/dpdk/spdk_pid71198 00:59:39.445 Removing: /var/run/dpdk/spdk_pid71290 00:59:39.445 Removing: /var/run/dpdk/spdk_pid71391 00:59:39.445 Removing: /var/run/dpdk/spdk_pid71476 00:59:39.445 Removing: /var/run/dpdk/spdk_pid71557 00:59:39.445 Removing: /var/run/dpdk/spdk_pid71667 00:59:39.445 Removing: /var/run/dpdk/spdk_pid71764 00:59:39.445 Removing: /var/run/dpdk/spdk_pid71860 00:59:39.445 Removing: /var/run/dpdk/spdk_pid71945 00:59:39.445 Removing: /var/run/dpdk/spdk_pid72026 00:59:39.445 Removing: /var/run/dpdk/spdk_pid72135 00:59:39.445 Removing: /var/run/dpdk/spdk_pid72228 00:59:39.445 Removing: /var/run/dpdk/spdk_pid72329 00:59:39.445 Removing: /var/run/dpdk/spdk_pid72414 00:59:39.445 Removing: /var/run/dpdk/spdk_pid72493 00:59:39.445 Removing: 
/var/run/dpdk/spdk_pid72573 00:59:39.445 Removing: /var/run/dpdk/spdk_pid72656 00:59:39.445 Removing: /var/run/dpdk/spdk_pid72767 00:59:39.445 Removing: /var/run/dpdk/spdk_pid72858 00:59:39.445 Removing: /var/run/dpdk/spdk_pid72959 00:59:39.445 Removing: /var/run/dpdk/spdk_pid73039 00:59:39.445 Removing: /var/run/dpdk/spdk_pid73119 00:59:39.445 Removing: /var/run/dpdk/spdk_pid73193 00:59:39.445 Removing: /var/run/dpdk/spdk_pid73273 00:59:39.445 Removing: /var/run/dpdk/spdk_pid73382 00:59:39.445 Removing: /var/run/dpdk/spdk_pid73477 00:59:39.445 Removing: /var/run/dpdk/spdk_pid73622 00:59:39.445 Removing: /var/run/dpdk/spdk_pid73923 00:59:39.445 Removing: /var/run/dpdk/spdk_pid73965 00:59:39.445 Removing: /var/run/dpdk/spdk_pid74426 00:59:39.445 Removing: /var/run/dpdk/spdk_pid74623 00:59:39.445 Removing: /var/run/dpdk/spdk_pid74724 00:59:39.445 Removing: /var/run/dpdk/spdk_pid74835 00:59:39.445 Removing: /var/run/dpdk/spdk_pid74895 00:59:39.445 Removing: /var/run/dpdk/spdk_pid74926 00:59:39.445 Removing: /var/run/dpdk/spdk_pid75224 00:59:39.445 Removing: /var/run/dpdk/spdk_pid75301 00:59:39.445 Removing: /var/run/dpdk/spdk_pid75394 00:59:39.445 Removing: /var/run/dpdk/spdk_pid75822 00:59:39.445 Removing: /var/run/dpdk/spdk_pid75975 00:59:39.704 Removing: /var/run/dpdk/spdk_pid76781 00:59:39.704 Removing: /var/run/dpdk/spdk_pid76924 00:59:39.704 Removing: /var/run/dpdk/spdk_pid77134 00:59:39.704 Removing: /var/run/dpdk/spdk_pid77245 00:59:39.704 Removing: /var/run/dpdk/spdk_pid77632 00:59:39.704 Removing: /var/run/dpdk/spdk_pid77899 00:59:39.704 Removing: /var/run/dpdk/spdk_pid78263 00:59:39.704 Removing: /var/run/dpdk/spdk_pid78483 00:59:39.704 Removing: /var/run/dpdk/spdk_pid78631 00:59:39.704 Removing: /var/run/dpdk/spdk_pid78695 00:59:39.704 Removing: /var/run/dpdk/spdk_pid78837 00:59:39.704 Removing: /var/run/dpdk/spdk_pid78869 00:59:39.704 Removing: /var/run/dpdk/spdk_pid78934 00:59:39.704 Removing: /var/run/dpdk/spdk_pid79147 00:59:39.704 Removing: /var/run/dpdk/spdk_pid79400 00:59:39.704 Removing: /var/run/dpdk/spdk_pid79852 00:59:39.704 Removing: /var/run/dpdk/spdk_pid80288 00:59:39.704 Removing: /var/run/dpdk/spdk_pid80740 00:59:39.704 Removing: /var/run/dpdk/spdk_pid81267 00:59:39.704 Removing: /var/run/dpdk/spdk_pid81420 00:59:39.704 Removing: /var/run/dpdk/spdk_pid81513 00:59:39.704 Removing: /var/run/dpdk/spdk_pid82142 00:59:39.704 Removing: /var/run/dpdk/spdk_pid82218 00:59:39.704 Removing: /var/run/dpdk/spdk_pid82667 00:59:39.704 Removing: /var/run/dpdk/spdk_pid83040 00:59:39.704 Removing: /var/run/dpdk/spdk_pid83534 00:59:39.704 Removing: /var/run/dpdk/spdk_pid83666 00:59:39.704 Removing: /var/run/dpdk/spdk_pid83726 00:59:39.704 Removing: /var/run/dpdk/spdk_pid83791 00:59:39.704 Removing: /var/run/dpdk/spdk_pid83854 00:59:39.704 Removing: /var/run/dpdk/spdk_pid83924 00:59:39.704 Removing: /var/run/dpdk/spdk_pid84118 00:59:39.704 Removing: /var/run/dpdk/spdk_pid84196 00:59:39.704 Removing: /var/run/dpdk/spdk_pid84272 00:59:39.704 Removing: /var/run/dpdk/spdk_pid84354 00:59:39.704 Removing: /var/run/dpdk/spdk_pid84392 00:59:39.704 Removing: /var/run/dpdk/spdk_pid84463 00:59:39.704 Removing: /var/run/dpdk/spdk_pid84617 00:59:39.704 Clean 00:59:39.704 17:56:40 -- common/autotest_common.sh@1453 -- # return 0 00:59:39.704 17:56:40 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:59:39.704 17:56:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:59:39.704 17:56:40 -- common/autotest_common.sh@10 -- # set +x 00:59:39.963 17:56:40 -- spdk/autotest.sh@391 -- # 
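The Cleaning/Removing listing above shows the autotest cleanup deleting DPDK runtime state: the spdk0 runtime directory (config, fbarray_* memory segments, hugepage_info) plus one stale /var/run/dpdk/spdk_pid* entry for each SPDK process the run launched. Judging only from the paths printed here, the deletion amounts to something like the following sketch (not the verbatim cleanup code):

    # primary-process runtime dir left by the last spdk_tgt
    sudo rm -rf /var/run/dpdk/spdk0
    # stale per-pid runtime entries from every earlier target in this run
    sudo rm -rf /var/run/dpdk/spdk_pid*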
timing_exit autotest 00:59:39.963 17:56:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:59:39.963 17:56:40 -- common/autotest_common.sh@10 -- # set +x 00:59:39.963 17:56:40 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:59:39.963 17:56:40 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:59:39.963 17:56:40 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:59:39.963 17:56:40 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:59:39.963 17:56:40 -- spdk/autotest.sh@398 -- # hostname 00:59:39.963 17:56:40 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:59:40.222 geninfo: WARNING: invalid characters removed from testname! 01:00:06.803 17:57:06 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:00:09.338 17:57:09 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:00:11.262 17:57:11 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:00:13.803 17:57:13 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:00:15.709 17:57:16 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:00:18.244 17:57:18 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:00:20.148 17:57:20 -- spdk/autotest.sh@408 -- # 
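The lcov invocations above form a single pipeline: merge the pre-run capture (cov_base.info) with the post-run capture (cov_test.info) into cov_total.info, then strip DPDK sources, system headers, and the example/tool binaries from the total in successive -r passes. Collapsed into a loop, with the repeated --rc option lists factored into one variable and the genhtml_*/geninfo_* flags and the --ignore-errors switch on the /usr/* pass elided (a sketch of the sequence, not the autotest code itself):

    RC='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    OUT=/home/vagrant/spdk_repo/spdk/../output
    # merge base and test captures into one tracefile
    lcov $RC -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # drop everything that is not SPDK source
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $RC -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
    done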
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 01:00:20.148 17:57:20 -- spdk/autorun.sh@1 -- $ timing_finish 01:00:20.148 17:57:20 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 01:00:20.148 17:57:20 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 01:00:20.148 17:57:20 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 01:00:20.148 17:57:20 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 01:00:20.148 + [[ -n 5249 ]] 01:00:20.148 + sudo kill 5249 01:00:20.157 [Pipeline] } 01:00:20.172 [Pipeline] // timeout 01:00:20.178 [Pipeline] } 01:00:20.192 [Pipeline] // stage 01:00:20.197 [Pipeline] } 01:00:20.211 [Pipeline] // catchError 01:00:20.220 [Pipeline] stage 01:00:20.222 [Pipeline] { (Stop VM) 01:00:20.235 [Pipeline] sh 01:00:20.516 + vagrant halt 01:00:23.864 ==> default: Halting domain... 01:00:30.563 [Pipeline] sh 01:00:30.842 + vagrant destroy -f 01:00:34.126 ==> default: Removing domain... 01:00:34.399 [Pipeline] sh 01:00:34.687 + mv output /var/jenkins/workspace/nvme-vg-autotest_3/output 01:00:34.697 [Pipeline] } 01:00:34.712 [Pipeline] // stage 01:00:34.717 [Pipeline] } 01:00:34.732 [Pipeline] // dir 01:00:34.738 [Pipeline] } 01:00:34.754 [Pipeline] // wrap 01:00:34.760 [Pipeline] } 01:00:34.773 [Pipeline] // catchError 01:00:34.782 [Pipeline] stage 01:00:34.785 [Pipeline] { (Epilogue) 01:00:34.797 [Pipeline] sh 01:00:35.085 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 01:00:40.425 [Pipeline] catchError 01:00:40.427 [Pipeline] { 01:00:40.440 [Pipeline] sh 01:00:40.724 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 01:00:40.724 Artifacts sizes are good 01:00:40.735 [Pipeline] } 01:00:40.749 [Pipeline] // catchError 01:00:40.760 [Pipeline] archiveArtifacts 01:00:40.767 Archiving artifacts 01:00:40.874 [Pipeline] cleanWs 01:00:40.886 [WS-CLEANUP] Deleting project workspace... 01:00:40.886 [WS-CLEANUP] Deferred wipeout is used... 01:00:40.893 [WS-CLEANUP] done 01:00:40.895 [Pipeline] } 01:00:40.912 [Pipeline] // stage 01:00:40.917 [Pipeline] } 01:00:40.932 [Pipeline] // node 01:00:40.937 [Pipeline] End of Pipeline 01:00:40.988 Finished: SUCCESS